On Monday, Ars Technica hosted our Ars Frontiers virtual conference. In our fifth panel, we covered “The Lightning Onset of AI—What Suddenly Changed?” The panel featured a conversation with Paige Bailey, lead product manager for Generative Models at Google DeepMind, and Haiyan Zhang, general manager of Gaming AI at Xbox, moderated by Ars Technica’s AI reporter, Benj Edwards.
The panel originally streamed live, and you can now watch a recording of the entire event on YouTube. The “Lightning AI” segment begins at the 2:26:05 mark in the broadcast.
Ars Frontiers 2023 livestream recording.
With “AI” being a nebulous term, meaning different things in different contexts, we began the discussion by considering the definition of AI and what it means to the panelists. Bailey said, “I like to think of AI as helping derive patterns from data and use it to predict insights … it’s not anything more than just deriving insights from data and using it to make predictions and to make even more useful information.”
Zhang agreed, but from a video game angle, she also views AI as an evolving creative force. To her, AI is not just about analyzing, pattern-finding, and classifying data; it is also developing capabilities in creative language, image generation, and coding. Zhang believes this transformative power of AI can elevate and inspire human inventiveness, especially in video games, which she considers “the ultimate expression of human creativity.”
Next, we dove into the main question of the panel: What has changed that’s led to this new era of AI? Is it all just hype, perhaps based on the high visibility of ChatGPT, or have there been some major tech breakthroughs that brought us this new wave?
Zhang pointed to the developments in AI techniques and the vast amounts of data now available for training: “We’ve seen breakthroughs in the model architecture for transformer models, as well as the recursive autoencoder models, and also the availability of large sets of data to then train these models and couple that with thirdly, the availability of hardware such as GPUs, MPUs to be able to really take the models to take the data and to be able to train them in new capabilities of compute.”
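To make the architecture item on that list concrete, here is a minimal sketch of the scaled dot-product self-attention operation at the core of transformer models; the shapes and matrices are our own illustration, not anything discussed on the panel.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; W*: (d_model, d_k) projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # How strongly each position attends to every other position.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns raw scores into attention weights per position.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted mix of the value vectors.
    return weights @ V
```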
Bailey echoed these sentiments, adding a notable mention of open-source contributions: “We also have this vibrant community of open source tinkerers that are open sourcing models, models like LLaMA, fine-tuning them with very high-quality instruction tuning and RLHF datasets.”
When asked to elaborate on the significance of open source collaborations in accelerating AI advancements, Bailey mentioned the widespread use of open-source machine learning frameworks like PyTorch, JAX, and TensorFlow. She also affirmed the importance of sharing best practices, stating, “I certainly do think that this machine learning community is only in existence because people are sharing their ideas, their insights, and their code.”
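For readers curious what the instruction fine-tuning Bailey mentions looks like in practice, here is a hedged sketch using PyTorch via the Hugging Face Transformers library; the model name and the instructions.jsonl file are placeholders we chose, not anything Bailey referenced.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Placeholder names: any small open causal LM and any local JSONL file of
# {"prompt": ..., "response": ...} pairs would do here.
model_name = "openlm-research/open_llama_3b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA-style tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

data = load_dataset("json", data_files="instructions.jsonl")["train"]

def tokenize(example):
    # Fold each instruction pair into one next-token-prediction sequence.
    return tokenizer(example["prompt"] + "\n" + example["response"],
                     truncation=True, max_length=512)

data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    # mlm=False tells the collator to copy input_ids into labels (causal LM).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```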
When asked about Google’s plans for open source models, Bailey pointed to existing Google Research resources on GitHub and emphasized their partnership with Hugging Face, an online AI community. “I don’t want to give away anything that might be coming down the pipe,” she said.
Generative AI on game consoles, AI risks
As part of a conversation about advances in AI hardware, we asked Zhang how long it would be before generative AI models could run locally on consoles. She said she was excited about the prospect and noted that a dual cloud-client configuration may come first: “I do think it will be a combination of working on the AI to be inferencing in the cloud and working in collaboration with local inference for us to bring to life the best player experiences.”
Bailey pointed to the progress of shrinking Meta’s LLaMA language model to run on mobile devices, hinting that a similar path forward might open up the possibility of running AI models on game consoles as well: “I would love to have a hyper-personalized large language model running on a mobile device, or running on my own game console, that can perhaps make a boss that is particularly gnarly for me to beat, but that might be easier for somebody else to beat.”
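The on-device path Bailey alludes to typically relies on aggressive quantization of the model weights. As a rough sketch under our own assumptions (the model file path is a placeholder, and this is not an Xbox or Google toolchain), a 4-bit LLaMA-family model can already run entirely locally through the llama-cpp-python bindings:

```python
from llama_cpp import Llama

# Load 4-bit quantized weights produced by llama.cpp's conversion tools;
# everything below runs on the local CPU, with no cloud round trip.
llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")

out = llm(
    "Describe a boss fight tuned for a player who struggles with dodging:",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```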
As a follow-up, we asked whether a generative AI model running locally on a smartphone would cut Google out of the equation. “I do think that there’s probably space for a variety of options,” said Bailey. “I think there should be options available for all of these things to coexist meaningfully.”
In discussing the social risks from AI systems, such as misinformation and deepfakes, both panelists said their respective companies were committed to responsible and ethical AI use. “At Google, we care very deeply about making sure that the models that we produce are responsible and behave as ethically as possible. And we actually incorporate our responsible AI team from day zero, whenever we train models from curating our data, making sure that the right pre-training mix is created,” Bailey explained.
Despite her earlier enthusiasm for open source and locally run AI models, Bailey mentioned that API-based AI models that run only in the cloud might be safer overall: “I do think that there is significant risk for models to be misused in the hands of people that might not necessarily understand or be mindful of the risk. And that’s also part of the reason why sometimes it helps to prefer APIs as opposed to open source models.”
Like Bailey, Zhang also discussed Microsoft’s corporate approach to responsible AI, but she also remarked about gaming-specific ethics challenges, such as making sure that AI features are inclusive and accessible.
Audience questions about AGI, data set sourcing
Near the end of the session, Bailey and Zhang took questions from our audience, submitted through YouTube comments and selected by the moderator. Because of long answers and limited time, they answered only two, but both addressed popular concerns about AI.
First, referring to artificial general intelligence (AGI), which many define as an AI agent that could hypothetically perform any intellectual task a highly skilled human can do, someone asked, “How do we put fears of AGI to rest?”
Bailey highlighted the fact that while AGI is an “ill-defined” term, progress has been made in creating a model that can handle a broad range of tasks. “If we’re just talking about creating a generally useful model, we’re close to being there,” Bailey said. She cautioned, however, that the type of AGI often depicted in science-fiction scenarios is “still a long way off, if ever, and is something that we should be mindful about as an industry and start building processes.”
Bailey emphasized the necessity of responsible AI features and safeguards, and urged the industry to work on clearer definitions of AGI to aid lawmakers in developing effective regulatory standards. She also addressed concerns about AI replacing jobs, asserting that AGI is more likely to enhance productivity and create new roles, likening its potential use to having “a little grad student as part of my own tiny research lab.”
Echoing Bailey’s sentiments, Zhang encouraged public dialogue on AI’s impact and role in society. She particularly focused on the tendency of people to anthropomorphize AI, using the classic game Pac-Man to illustrate the point: “I mean, when you play Pac-Man, and those ghosts are chasing you, you are cursing those ghosts as if they were alive, you are projecting humanness, personality onto these artificial beings, which are just rules-based algorithms, right?”

Zhang compared worries about AGI to our tendency to see intelligence in the ghosts in the 1980 arcade game Pac-Man.
In the final segment of the discussion, the panelists were asked if they had any concerns about AI models being trained using public data without the consent of creators—essentially, the scraping of Internet content to feed these powerful models.
Bailey provided an example of how her team at Google addresses this issue through their Bard project. They train their models on publicly available data and code, but have incorporated a concept of recitation checking into their tool. This means that if any generated code matches something in a public repository like GitHub, the model will provide a URL back to the source, thereby giving attribution to the author and specifying the license used for that code. This, according to Bailey, also aids in the discovery of new projects and functions that users may not have been aware of.
“I do think there are ways to introduce credit attribution and then also to be mindful and to only include data that the authors have explicitly listed under a permissive license for your pre-training mix,” Bailey said.
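Google has not published Bard’s recitation-checking internals, but the idea Bailey describes can be sketched naively: hash overlapping windows of the generated code and look them up in an index built from public repositories, returning source URLs and licenses for any match. Everything below is our own toy reconstruction, not Bard’s actual mechanism.

```python
import hashlib

def shingles(code: str, n: int = 8):
    """Yield a hash for every window of n consecutive non-blank lines."""
    lines = [ln.strip() for ln in code.splitlines() if ln.strip()]
    for i in range(len(lines) - n + 1):
        window = "\n".join(lines[i:i + n])
        yield hashlib.sha256(window.encode()).hexdigest()

def recitation_check(generated: str, index: dict) -> set:
    """Return (url, license) pairs for indexed public code that the
    generated text reproduces. `index` maps hash -> (url, license) and
    would be built offline from the pre-training corpus."""
    return {index[h] for h in shingles(generated) if h in index}
```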
Zhang, in turn, pointed to Bing Chat as an example of a Microsoft AI product that cites public data. She said that Microsoft designed Bing’s AI system from the start to ensure that all information is grounded in actual Internet sources, attributing every answer to its original page or source: “I think that is core to how we think about product development with generative AI [at Microsoft], with these variations of GPT models, that we do think about that inclusion and making sure every creator, every contributor is supported and feels that their content is being respected.”
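As a loose illustration of the grounding pattern Zhang describes (this is not Bing’s actual pipeline; the retriever and generator below are stubs we invented), a grounded system retrieves source pages first, generates only from that context, and returns the source URLs alongside the answer:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Passage:
    url: str
    text: str

def retrieve(query: str) -> List[Passage]:
    # Stub: a real system would query a web index here.
    return [Passage("https://example.com/a", "..."),
            Passage("https://example.com/b", "...")]

def grounded_answer(query: str, generate: Callable[[str], str]) -> dict:
    passages = retrieve(query)
    context = "\n\n".join(p.text for p in passages)
    answer = generate(f"Answer using only this context:\n{context}\n\nQ: {query}")
    # Every answer ships with the pages it was grounded in.
    return {"answer": answer, "sources": [p.url for p in passages]}
```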