
On a recent warm fall evening at P&T Knitwear Bookstore, I had the pleasure of moderating a deep
conversation with two brilliant minds: Gary Marcus, AI expert and author of “Taming Silicon Valley,” and Raffi Krikorian, CTO of the Emerson Collective and host of the podcast
“Technically Optimistic.” Our discussion spanned the current and future role of AI in society, its impact on Gen Z, and the challenges of regulating AI in a profit-driven tech
industry.
We began by diving into Gen Z’s relationship with social media. They are the first generation growing up fully immersed in digital ecosystems, making them vulnerable to
manipulation, misinformation, and data harvesting on an unprecedented scale.
I posed the question: “What does the world look like for the next generation, especially in the context of AI?”
Marcus immediately set the tone by sharing a recent study that alarmed him, conducted by memory researcher Elizabeth Loftus. The study suggested that large language models can implant false memories in people, a particularly chilling finding given the scale at which these technologies are deployed. “That means the people who run the large language models have enormous power to manipulate us, whether they choose to or not,” Marcus said.
The concern is not only direct manipulation but also the lack of regulation that leaves society vulnerable to it. “In 15 years, we may live in a world where companies control our views, much like what social media did, but worse,” he warned.
Krikorian echoed the sentiment, capturing the situation with a disturbing metaphor: “It’s hard to tell the difference between clean water and sewage, because it’s all coming through the same pipe.” He highlighted the efforts of young activists like Sneha Revanur at Encode Justice, who are rallying their peers to tell their stories and organize against the dark side of AI. “We might need to give them a few tools to pull it off, but I actually believe in these young kids to lead us out of this world.”
From there, we explored the ethical dilemmas that arise when profit-driven companies control technologies with profound societal implications. I offered a simple analogy: If water companies were profit-driven and realized they could sell sugar water at the same cost, and that people would consume more of it, wouldn’t they go for the sugary option?
Krikorian acknowledged the complexity of the issue. “There aren’t bad people in the commercial sector,” he said, “but the incentives are super strong to
behave in a particular way because there’s a single metric you’re optimizing for: the bottom line.” Feeding people “broccoli” (the tough but necessary truth) is much harder when “candy” (more engaging content) is so readily available.
The discussion then moved to AI regulation and governance. “The Biden administration did great work with their executive order on AI,” Marcus noted, “but it’s limited. It requests information and monitoring, but doesn’t introduce the laws we need.”
He stressed the importance of pre-deployment testing for any AI system, likening it to medical regulations: “You can’t release something to 100 million people without testing first to show that the benefits outweigh the risks.” However, Congress, not the administration, holds the legislative power to enact such policies.
In discussing global regulatory frameworks, Marcus pointed to the EU’s AI Act as a model to watch. Though not perfect, Europe’s approach includes data privacy protections that the U.S. lacks. “Everyone complains about the EU’s GDPR because of the cookie buttons,” he said, “but it’s actually protecting people’s privacy in ways that we should take note of.”
Toward the end of the conversation, we touched on the impact of AI on the future of work, another critical concern
for society.
While many fear that robots are coming to take our jobs, Marcus highlighted the nuances of the issue. “Certain sectors, like visual arts or voiceover work, have already been hit hard,” he explained, “but others, like radiology, haven’t seen the job loss that was predicted.” The complexity of human tasks still protects many jobs, at least for now. Krikorian
chimed in, pointing out that in the software industry, the loss of entry-level roles due to automation might lead to a generational gap in skills: “If all these junior engineers aren’t
needed, who will train the next generation?”
As we wrapped up, it became clear that while AI holds enormous potential for social good, we must address its risks head-on. “My worst nightmare is no regulation of AI,” Marcus confessed. “But my second worst fear is regulatory capture, where the big companies set the rules and stifle innovation from smaller players.” His warning was clear: if we don’t act now, we may find ourselves living in a world shaped by unregulated, profit-driven technologies, with little say in the matter.
Ultimately, the message of the night
was one of cautious optimism. Gen Z, with the right tools and awareness, could be the generation that leads us out of this AI wilderness, as long as society supports their efforts and insists on
responsible technology.