More emerging concerns at OpenAI

As OpenAI pushes to the forefront of AI technology, namely the development of artificial general intelligence (AGI), a form of AI that could perform any intellectual task a human being can, pressing ethical and safety concerns are emerging.

As reported by Kevin Roose of the New York Times, a group of current and former OpenAI employees has voiced concerns about the organization's focus on, and commitment to, safe AI.

They argue that in the rush to make OpenAI the leader in the AGI race, the company's leadership might have sidestepped sufficient measures to prevent potential downsides or dangers of AI systems.

Daniel Kokotajlo, a former researcher in OpenAI's governance division and a lead voice in this group, believes that AGI will become a reality by 2027.

This accelerated timeline underscores their worries: they argue that safety work has not kept pace with innovation and has taken a backseat in the pursuit of growth and profit.

This sentiment from within one of the leading AI organizations sparks a crucial industry-wide conversation about the balance between innovation, profit, and ethical responsibility.

Also, this line from the same article captures a tactic that does no company any favors: "A Google spokesman declined to comment."

Declining to comment is the ultimate irresponsible move, especially for a company as well-staffed and important as Google.

The promise and peril of AI technology present challenges and opportunities for C-suite executives and senior communications professionals. The challenge lies in integrating robust ethical considerations into strategic planning, particularly as these technologies can significantly disrupt market dynamics and societal norms. The opportunity, however, is to lead by example in setting global standards for responsible AI development that other companies could follow.

The concerns raised by this group serve as a crucial checkpoint for all stakeholders involved in AGI development. As companies venture into these uncharted territories, engaging in a substantive dialogue on aligning tech-driven ambitions with societal and ethical responsibilities is vital.

Enjoy the ride + plan accordingly.

-Marc

Can AI make VAR better?

The Premier League has been at the forefront of harnessing technology to make match days better for fans, players, and even referees. Video assistant referee (VAR) technology expedites decision-making and reduces game stoppages, while goal-line technology ensures quick, accurate goal determinations and minimizes delays caused by disputes. Additionally, wearable tracking devices on players provide real-time performance data to coaches and physios, enabling faster tactical adjustments during matches.

Now, AI-enabled "force fields" are coming to speed up offside calls in the Premier League.

Well, that is the hope anyway.

According to The Times, the Premier League will use this cutting-edge system, which harnesses artificial intelligence and "force fields," to streamline offside decisions for the upcoming season.

To reduce delays, the Premier League has struck a deal with Second Spectrum, an American software company owned by Genius Sports, to provide the technology for the league's semi-automated offside system.

Instead of relying on traditional limb-tracking methods, Second Spectrum's Dragon system captures 10,000 "surface mesh data points" per player, updating 200 times per second, ensuring unparalleled accuracy in determining offside positions.
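To put those figures in perspective, here is a quick back-of-the-envelope calculation in Python. The 22-players-on-the-pitch assumption is mine, not the article's, and the ball is excluded; it is only meant to show the rough scale of the data stream.

```python
# Rough throughput implied by the figures quoted above.
# Assumption not in the article: 22 players tracked, ball excluded.
points_per_player = 10_000   # "surface mesh data points" per player
updates_per_second = 200     # refresh rate quoted for the Dragon system
players_on_pitch = 22        # assumed full complement of both teams

point_updates_per_second = points_per_player * updates_per_second * players_on_pitch
print(f"{point_updates_per_second:,} mesh-point updates per second")  # 44,000,000
```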

The FIFA-approved system will replace the current Hawk-Eye system, which involves manually drawing lines on a screen for a video assistant referee (VAR), often leading to delays of two minutes or more.

Second Spectrum says their AI technology will automatically detect whether attackers are offside at the moment the ball is kicked, generating accurate lines within seconds. An image will then be provided to the VAR, who will determine whether the attacker is interfering with play.

By utilizing "mesh" data, the system will effectively create an invisible "force field" around each player. When this "force field" is breached by a part of an attacker's body capable of scoring a goal, an offside message is triggered.
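For intuition only, here is a minimal Python sketch of the general logic such a system might apply at the moment the ball is played. It is not Second Spectrum's implementation, and it ignores details like the attacker needing to be in the opponents' half and ahead of the ball; the point is just how mesh points flagged as "able to score" can trigger an offside signal when they cross the second-to-last defender's line.

```python
from dataclasses import dataclass

@dataclass
class MeshPoint:
    x: float          # distance toward the defending team's goal line (metres)
    can_score: bool   # True for body parts that can legally score (not hands/arms)

@dataclass
class Player:
    points: list[MeshPoint]   # surface mesh points captured for this player

def offside_line(defenders: list[Player]) -> float:
    """Line set by the second-to-last defender (the last is usually the keeper),
    measured at that defender's point nearest their own goal line."""
    rearmost_per_defender = sorted(
        (max(p.x for p in d.points if p.can_score) for d in defenders),
        reverse=True,
    )
    return rearmost_per_defender[1]

def is_offside(attacker: Player, defenders: list[Player]) -> bool:
    """The 'force field' is breached when any scoring-capable point of the
    attacker's body is beyond the defenders' line as the ball is played."""
    line = offside_line(defenders)
    return any(p.x > line for p in attacker.points if p.can_score)

# Toy example: two defenders and one attacker, each reduced to a few points.
keeper   = Player([MeshPoint(95.0, True), MeshPoint(94.5, True)])
defender = Player([MeshPoint(70.0, True), MeshPoint(69.2, True)])
striker  = Player([MeshPoint(70.4, True), MeshPoint(69.8, False)])  # arm ignored

print(is_offside(striker, [keeper, defender]))  # True: 70.4 > 70.0
```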

Premier League officials are confident that the semi-automated offside system will significantly reduce delays in offside decisions. The league is aiming for an average reduction of 31 seconds per call.

-Marc

AI hype vs. AI adoption

Generative AI catapulted into the public spotlight with the November 2022 launch of ChatGPT.

But usage has yet to match the buzz.

A recent Reuters Institute and Oxford University survey found that despite the "hype" surrounding AI, very few people use tools like ChatGPT regularly.

The survey polled 12,000 respondents across six countries, including the UK; only 2% of Britons reported daily use.

However, the study revealed a generational divide, with young people aged 18-24 being the most enthusiastic adopters of generative AI tools.

These tools, capable of generating human-like text, images, audio, and video, are rapidly gaining traction among younger demographics, highlighting a potential shift in public interest toward AI technologies.