Hello everyone, I’m writing to you today from Deer Valley, Utah, where Fortune is hosting its Brainstorm Tech conference. Not surprisingly, AI is a major theme at the event. Here’s a summary of the key information about AI presented so far:
In an interview with Federal Reserve Bank of San Francisco President Mary Daly on Monday, my colleague Emma Hinchliffe said that generative AI’s impact on the labor market will depend on how the technology is used. Daly said that we can expect generative AI to contribute at least to average productivity gains, which are currently 1.5% per year.
But she also said the potential impact on productivity gains would be much greater if AI helped invent new products and new processes, rather than simply automating what already exists. [AI] “The sad thing is, it was our fault,” Daly said, noting that every new technology to date has, in the long run, created more jobs than it lost, and he believes AI will be no exception.
Building on these ideas, Stanford economist Erik Brynjolfsson urged companies to view AI as a complement to human labor. While acknowledging that many companies struggle to figure out how to derive a reasonable return on investment from generative AI, Brynjolfsson said it’s important to stop thinking about jobs and start thinking about tasks. AI can automate some tasks within an organization, but it can’t automate the entire job (at least not yet). In fact, as automation lowers the costs associated with some roles, it may actually increase the demand for those roles, leading to more people being hired for those jobs (this is called Jevons’ paradox). Brynjolfsson is the co-founder of Workhelix, a company that helps companies do these task-based analyses and develop strategic plans to implement AI in the most effective way within their organizations. He said that tasks that are currently ideal for AI automation include many in software development and customer contact centers.
Robinhood CEO Vladimir Tenev told Fortune editor-in-chief Alison Shontell that he believes AI will democratize access to wealth management services: While the ultra-wealthy will continue to use human financial advisors, AI will be able to provide access to better financial advice to many people who previously couldn’t afford to hire one.
Agility Robotics CEO Peggy Johnson showed off the company’s humanoid robot, Digit, which is already working in warehouses as part of a multi-year contract with GXO Logistics. Johnson said Agility is currently integrating Digit with a large language model (LLM) so people can give it instructions in natural language. Johnson believes Digit and similar humanoid robots are needed to fill the shortage of about 1.1 million warehouse workers in the U.S.
Clara Shih, Head of AI at Salesforce, spoke about how to build trust in AI within large organizations. She touted Salesforce’s proprietary Einstein AI trust layer, which includes features like data security and guardrails to prevent harmful language from being generated, as well as techniques to defend against prompt injection attacks, where an adversary creates a prompt designed to trick the LLM into jumping a guardrail.
She also said the company will soon begin rolling out AI software that is more “agent-like” in nature. These are AI models that can perform tasks within a workflow rather than simply generating emails, letters or customer service dialogues. More broadly, Shih said, one way organizations can increase trust in AI is to make sure they’re using the right AI model for the problem at hand. Simply throwing a general-purpose large-scale language model at every business dilemma is unlikely to deliver the value companies expect from AI.
This morning I interviewed Jeff Dean, chief scientist at Google. He said that longer and longer context windows, like the ones Google is pushing with Gemini, will help curb AI hallucinations. But he also agreed with Microsoft’s Bill Gates’ recent comments that LLMs alone won’t get us to AGI, even if we continue to scale them up. Algorithmically, Dean agreed, other innovations will be needed.
There will be more AI discussions at Brainstorm Tech over the next few days, concluding around midday on Wednesday. You can watch the livestream here, archived sessions here, and coverage of many sessions on fortune.com.
Now for some more AI news.
Jeremy Kahn
JeremyKahn@fortune.com
Jeremiah Kahn
Before we get to the news… If you want to gain a deeper understanding of how AI can transform business and hear from Asia’s top business leaders about AI’s impact across industries, join us at Fortune Brainstorm AI Singapore. The event will be held on July 30-31 at the Ritz-Carlton, Singapore. Today is your last chance to register to attend. Hear from Alation CEO Satyen Sangani on the impact of AI on the digital transformation of GXS Bank in Singapore, Grab CTO Sutten Thomas Pradatheth on how quickly AI can be deployed across the Asia-Pacific region, Singapore’s Minister of Communications and Information Josephine Teo on the island nation’s efforts to become an AI superpower, and much more. Register here. We have a special code for Eye on AI readers to get 50% off your registration: BAI50JeremyK.
AI in the News
Yandex co-founder launches new AI infrastructure company in Europe. Arkady Volozh, co-founder of Russian tech conglomerate Yandex, is launching Nevius Group, an AI infrastructure company comprised mainly of former Yandex employees, the Financial Times reports. The move follows Yandex’s sale of core Russian assets due to the Ukraine war. Europe-based Nevius aims to develop a cloud computing platform for AI model training, partnering with a major European AI startup and setting up a data center in Finland.
OpenAI is reportedly training a new inference AI model, codenamed “Strawberry,” according to a Reuters article citing internal OpenAI documents obtained by the company. The model is meant to be an inference engine that will help future AI agents take action on the internet.
A new AI safety and security company backed by X.ai advisors and top adversarial AI researchers has come out of stealth. Called Gray Swan, the company was co-founded by Dan Hendricks, director of the Center for AI Safety and advisor to Elon Musk’s X.ai, and Matt Fredrickson, Zico Colter, and Andy Zhou, prominent AI researchers at Carnegie Mellon University who study how to attack large-scale language models. Gray Swan launched two products: an LLM that the company says is much more robust to attacks than other AI models, and a product that evaluates how a given LLM performs when subjected to various prompt injection attacks. Hendricks has been in the news lately as one of the leading proponents of California’s Senate Bill 1047, which would require companies building advanced AI models to take a variety of steps to prevent potential “catastrophic damage.” You can read more about Gray Swan’s debut on the company’s blog.
An investigation found that NVIDIA, Apple, Anthropic, and Salesforce used YouTube video transcripts to train AI models without Google’s permission. Wired, in collaboration with news outlet Proof News, found that companies including Anthropic, Apple, NVIDIA, and Salesforce used subtitles from more than 173,000 YouTube videos, in violation of YouTube’s rules against unauthorized data collection. Creators who uploaded videos to the Google-owned platform were unaware that their content was being used, and many of them sought compensation and regulation. However, many of the companies involved in using YouTube transcripts argued that their actions should have fallen under the “fair use” exception to copyright infringement claims.
The UK government is expected to introduce a landmark AI bill on Wednesday. The new Labour government will announce plans to enact new AI legislation in Wednesday’s “King’s Speech” (an annual address to Parliament by the monarch outlining the government’s legislative agenda), the Financial Times reported. According to the paper, the bill aims to create binding rules for the development of advanced AI models. Chancellor Rishi Sunak’s previous government placed more emphasis on voluntary initiatives by technology companies developing AI than on legal requirements.
Focus on AI research
Unable to compete with big tech companies and startups, universities are trying to find niche areas of AI research. For more than a decade, university computer science departments have lamented the exodus of top AI researchers and recent PhD graduates to tech companies that offer not only much higher salaries but also access to much larger and more expensive clusters of graphic processing units (the type of chips most commonly used in AI applications) and vast amounts of data for training AI models. This situation has only gotten worse in the current LLM era. According to a Wall Street Journal article, some universities are now trying to see if they can zigzag as other AI fields zigzag. Rather than encouraging AI researchers to take on LLMs, these universities are hiring academics to research entirely new algorithms, architectures and even in some cases hardware that require far fewer GPUs and less energy. In other cases, universities are using partnerships with big tech companies to gain access to GPUs, the paper said. Additionally, several universities are spending a lot of money to develop GPU clusters large enough to at least have the potential to provide what individual researchers have access to at places like OpenAI, Microsoft, Google, and Meta.
The fate of AI
Managers and employees have vastly different expectations about how much time AI can save them — Ryan Hogg
California AI bill SB-1047 sparks fierce debate, senator likens it to ‘Jets vs. Sharks’ fight — Sharon Goldman
OpenAI Launches New Metric to Track AI Progress. But Wait: Where is AGI? — Sharon Goldman
Early Amazon and Tesla investors predict NVIDIA’s market cap will soar to $50 trillion — Sasha Rogelberg
AI Calendar
July 21-27: International Conference on Machine Learning (ICML), Vienna, Austria
July 30-31: Fortune Brainstorm AI Singapore (Register here)
August 12-14: Ai4 2024 in Las Vegas
December 8-12: Neural Information Processing Systems (Neurips) 2024, Vancouver, British Columbia.
December 9-10: Fortune Brainstorm AI San Francisco (Register here)
Brain Food
J.D. Vance, Donald Trump’s running mate, is a leading advocate of open-source AI. Vance, who was a venture capitalist before becoming a senator from Ohio, has long touted the benefits of open-source AI models, The Information reported. Vance said in March that open-source AI models are the best defense against “woke AI” because they can be modified by users, potentially circumventing the guardrails that developers built into the models initially. He posted these comments on X in response to the controversy surrounding Google’s chatbot Gemini and its guardrails for generating videos from text. The guardrails were originally so strict about preventing potentially racist imagery that they failed to generate images of groups of white people, even when those images were historically accurate, such as a Nazi rally or a Viking feast.
Beyond Vance’s support for an open AI model, the Republican election manifesto also supports the idea of repealing President Joe Biden’s executive orders on AI, and states that the party will pursue a “pro-innovation” and anti-regulatory stance on AI technology. Trump’s campaign has also received campaign contributions from prominent “effective accelerationists” (e/accs) in Silicon Valley, who believe in the unrestrained development of AI because they believe the potential of AI technology far outweighs the potential harms. These include a16z’s Marc Andreessen and Ben Horowitz. However, Trump has also attracted support from billionaires Elon Musk and Peter Thiel, who are ambivalent about AI development and the potential existential risks of AI technology, but who generally support a libertarian approach to tech regulation.