The AI industry is moving at breakneck speed, with new models and capabilities emerging almost daily. Despite a quieter news cycle this week, a significant issue has caught the attention of the industry and the public alike: AI ethics—or the lack thereof.
_______________________________________________________________
- AI ethics in the spotlight after child sexual abuse images found in the LAION dataset, used for training prominent AI image generators.
- Microsoft’s AI chatbot and Google’s AI services criticized for offensive and biased outputs, highlighting ongoing challenges in AI ethics.
- EU to enforce AI regulations with fines, while AI continues to advance with applications ranging from creative arts to environmental monitoring.
_______________________________________________________________
Ethical Concerns in AI Grow as Tech Advances: A Weekly Roundup
In a concerning revelation, the AI community was shaken by news that LAION-5B, a dataset used to train AI image generators such as Stable Diffusion and Imagen, contained thousands of suspected child sexual abuse images.
The Stanford Internet Observatory, working alongside anti-abuse nonprofits, uncovered the illicit content and reported it to the authorities. LAION has since taken the dataset down while it excises the offending material.
This situation highlights a disturbing trend: the rapid race to market AI products can sometimes overshadow ethical considerations.
Thanks to the proliferation of no-code AI creation tools, it is now effortless to train models on whatever data you please. This democratization, while exciting, lowers the barrier to unleashing unvetted and potentially dangerous AIs.
The temptation to cut ethical corners is strong when the speed of development directly impacts profits and stature. Shareholders care about market domination, not morals. With enough money on the line, ethics become an afterthought.
AI ethics is daunting, but it’s essential. It involves scrutinizing datasets, consulting stakeholders, and making difficult tradeoffs. Still, in the land grab for market share, companies throw caution and conscience to the wind, churning out AI systems with little thought of their impact.
Instances of unethical AI decisions are not new:
- Microsoft’s Bing Chat, now Microsoft Copilot, made inappropriate comparisons involving a journalist at its launch.
- Google’s Bard and ChatGPT got caught offering racist medical advice as recently as October.
- Anglocentric biases were observed in DALL-E 3, the latest iteration of OpenAI’s image generator.
Though the EU is introducing regulations to enforce AI ethics with potential fines, the journey to entirely ethical AI is far from over.
For now, however, profits are being prioritized over people. Harmful AI will continue to spread as long as this is the case.
Society must demand more from these powerful technologies before the damage is irreversible.
Other noteworthy AI developments this week include:
- Microsoft’s chatbot Copilot can now compose original songs thanks to an integration with the music app Suno. But who owns the rights to AI-created art? The beats and bytes are sparking big debates.
- Rite Aid slapped with a 5-year facial recognition ban after “reckless” tech left customers feeling exposed. As unregulated AI identifies faces faster than ever, regulation calls grow louder.
- The EU expands its initiative to nourish homegrown AI startups with computing power for model training. Access to supercomputers aligns with the EU’s mission to advance AI responsibly.
- OpenAI grants its board veto power over new models to catch harmful AI pre-release. But some argue proper safety requires public, not corporate, guardrails.
- Wary of public backlash, most CIOs are treading carefully with conversational AI like ChatGPT, taking a cautious toe-in-the-water approach to generative AI despite the hype.
- News outlets sued Google for allegedly stealing content to train AI models like Bard.
- OpenAI cut a deal with Axel Springer to legally access articles from Politico, Business Insider, and more to boost ChatGPT.
AI’s Ominous Gaze: Predicting Lives and Rewriting Reality
Danish researchers have developed an AI called life2vec that uses personal data to predict personality traits, interests, and even the time of death in a twist that seems ripped from the pages of science fiction.
Feed the algorithm details about your upbringing, education, work, health, hobbies, and more, and it purports to forecast how the arc of your life will bend. The study’s accuracy claims should be taken with a grain of salt, but its Orwellian implications cannot be ignored.
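At its core, life2vec treats life events as tokens in a sequence, much like words in a sentence, and learns to map those sequences to outcomes. As a rough illustration only (the event names and weights below are invented; the real system trains a transformer on Danish registry data), scoring a life-event sequence might look like this:

```python
# Toy sketch of the life2vec idea: life events become tokens, and a
# model maps a person's event sequence to an outcome score. The
# vocabulary and weights here are hypothetical stand-ins for what the
# real transformer learns from registry data.

EVENT_WEIGHTS = {
    "smoker": -2.0,            # hypothetical learned weight per event
    "regular_exercise": 1.5,
    "university_degree": 0.5,
    "manual_labor_job": -0.5,
}

def risk_score(life_events):
    """Sum the (toy) learned weights over a person's event sequence;
    unknown events contribute nothing."""
    return sum(EVENT_WEIGHTS.get(event, 0.0) for event in life_events)

print(risk_score(["smoker", "university_degree"]))  # -1.5
```

A real model would of course account for the order and timing of events, which is exactly what a transformer over event sequences provides.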
Insurance firms are salivating at the prospect of probabilistic peeks into clients’ futures. However, ethical questions surround profiling and manipulating individuals based on AI augury. Will “precrime” enforcement be next?
Meanwhile, scientists at Carnegie Mellon built an AI chemist, Coscientist, that can handle the research drudgery like a champ. Think of it as a tireless, caffeine-free intern who can independently plan, design, and execute chemical reactions.
Powered by an LLM (think fancy language model like GPT-4, but for science), Coscientist is currently a whiz in chemistry. It scours research papers, identifies common reactions and ingredients, and then gets down to business, mixing and concocting like a real-life Dr. Frankenstein (minus the creepy castle and questionable ethics, hopefully).
“The moment I saw a non-organic intelligence be able to autonomously plan, design, and execute a chemical reaction that humans invented, that was amazing,” says lead researcher Gabe Gomes. “It was a ‘holy crap’ moment.”
FunSearch and StyleDrop: AI’s Playground of Math and Art
While Coscientist is busy playing lab rat, Google’s AI researchers have been having a field day in math and art.
FunSearch, short for searching in the function space, isn’t a search engine for kids. It’s a tool that helps mathematicians make discoveries, like having a brainy AI sidekick for your next groundbreaking equation.
FunSearch uses a clever “tag team” approach. One model comes up with candidate ideas, while the other acts as a critical evaluator, scoring each suggestion and feeding the best ones back for another round, ensuring everything stays grounded in mathematical reality.
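That creator-plus-critic loop can be sketched in miniature. The toy below swaps the language model for a random mutation and the mathematical evaluator for a simple objective; only the propose–score–keep-the-best structure mirrors FunSearch, everything else is invented for illustration:

```python
import random

def propose(candidate):
    """Stand-in for the 'creator': perturb the current best candidate.
    (The real system prompts an LLM to write new program variants.)"""
    return candidate + random.uniform(-1.0, 1.0)

def evaluate(candidate):
    """Stand-in for the 'evaluator': score a candidate against a known
    objective. This toy objective peaks at candidate == 3.0."""
    return -(candidate - 3.0) ** 2

def funsearch_loop(seed=0.0, rounds=200):
    random.seed(42)  # deterministic for reproducibility
    best, best_score = seed, evaluate(seed)
    for _ in range(rounds):
        new = propose(best)        # creator suggests a variation
        score = evaluate(new)      # critic keeps it honest
        if score > best_score:     # only verified improvements survive
            best, best_score = new, score
    return best

print(funsearch_loop())  # converges toward the optimum near 3.0
```

The key design point is that the evaluator is automatic and trustworthy, so the creative model can hallucinate freely; bad ideas are simply discarded.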
StyleDrop, on the other hand, is all about the aesthetics. Have you ever wanted to turn your photo into a watercolor masterpiece or give your cat portrait a Van Gogh twist?
StyleDrop lets you do just that and with impressive accuracy. Feed it an example of the style you’re after, and it’ll work its magic on any image you throw its way, even the alphabet (which is surprisingly tricky for AI models).
VideoPoet: From Text to Trippy Videos
Google’s AI ambitions extend beyond still images. VideoPoet is here to turn your text or pictures into moving masterpieces.
Imagine describing a scene from your dream to an AI and having it churn out a trippy video to match. That’s what VideoPoet does. As with any AI creation, the challenge is making the videos coherent and visually attractive.
We’re not talking Hollywood blockbusters here, but the results are undeniably intriguing.
From Snow to Bias: AI’s Practical and Perilous Sides
While some AI projects are busy writing haikus and painting cat portraits, others tackle real-world problems. Swiss researchers, for example, have developed a system that uses satellite data to estimate snow depth, a crucial factor for managing water resources and preventing avalanches.
This innovation shows how AI can be harnessed for practical applications that benefit entire communities.
However, a cautionary tale comes from Stanford researchers who found that AI models used in healthcare can perpetuate harmful biases.
These models, trained on biased medical data, can reinforce stereotypes and lead to unfair treatment for specific groups.
This highlights the importance of carefully scrutinizing AI systems to ensure they don’t exacerbate existing inequalities.
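One basic form of that scrutiny is an audit that compares a model’s error rate across demographic groups. The tiny dataset and the `predict()` stub below are invented purely to show the shape of such an audit; real ones use held-out clinical data:

```python
# Minimal sketch of a fairness audit: measure how often a (deliberately
# biased) toy model is wrong, separately for each group.

def predict(patient):
    """Hypothetical biased model: it under-flags high-risk patients in
    group 'B', mimicking the kind of skew biased training data causes."""
    if patient["symptom_score"] > 5 and patient["group"] != "B":
        return "high"
    return "low"

patients = [  # invented records: group, symptoms, ground-truth label
    {"group": "A", "symptom_score": 7, "true_label": "high"},
    {"group": "A", "symptom_score": 3, "true_label": "low"},
    {"group": "B", "symptom_score": 7, "true_label": "high"},
    {"group": "B", "symptom_score": 8, "true_label": "high"},
]

def error_rate(group):
    cases = [p for p in patients if p["group"] == group]
    wrong = sum(predict(p) != p["true_label"] for p in cases)
    return wrong / len(cases)

print(error_rate("A"), error_rate("B"))  # the gap exposes the bias
```

A large gap between the groups’ error rates is exactly the kind of signal the Stanford researchers are urging practitioners to look for before deployment.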
The message is clear: advanced AI is peering into the black box of fate while redefining the limits of science and art. But are we wise enough to control the genies we summoned from silicon lamps, or will they control us?
Are you intrigued by the unfolding narrative of AI and its implications for our world? For in-depth insights and the latest updates, subscribe to our newsletter.