
May 16, 2024


AI Overview: Your Weekly AI Briefing

Hello Niuralogists!

Step into this week's edition as we navigate the dynamic realm of artificial intelligence to bring you the most recent breakthroughs. Our primary focus is to dissect what these updates mean for workplaces, businesses, policies, and individual experiences. In this issue, we highlight Google's Gemini updates and its new rival to Sora, along with GPT-4o's enhanced AI interaction, which integrates text, audio, and vision capabilities.

For deeper insights, continue reading…

Google's Gemini Updates and Rival to Sora

At its I/O developer conference, Google announced major updates to its AI ecosystem, including enhancements to the Gemini model family and a new video generation model to rival OpenAI’s Sora. Gemini 1.5 Pro now features a 2-million-token context window and improved performance in code, logic, and image understanding, while the new Gemini 1.5 Flash is optimized for speed with a 1-million-token context window. Upcoming releases include Gemma 2 and the vision-language model PaliGemma. Gemini Advanced users will soon be able to create custom personas called 'Gems' from text descriptions. Additionally, Google introduced Veo, a model that generates 1080p videos longer than 60 seconds from prompts, and Imagen 3, an improved text-to-image model. The VideoFX tool, which supports scene-by-scene creation and adding music, is launching in the U.S. for select creators, with ImageFX available via a waitlist. Together, these updates significantly broaden Gemini’s capabilities and position Veo as a strong rival to Sora.
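
For developers who want to experiment with the new models, here is a minimal sketch of querying Gemini 1.5 Flash through Google's public google-generativeai Python SDK. The package, model identifier, and prompt are assumptions for illustration rather than anything announced at the keynote, and availability may vary by region.

```python
# Minimal sketch: querying Gemini 1.5 Flash via the google-generativeai SDK.
# Assumes `pip install google-generativeai` and a GOOGLE_API_KEY environment variable.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# "gemini-1.5-flash" targets the speed-optimized model; swap in
# "gemini-1.5-pro" for the larger-context variant.
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Summarize the key announcements from Google I/O 2024.")
print(response.text)
```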

GPT-4o Enhances AI Interaction with Integrated Text, Audio, and Vision Capabilities

OpenAI has introduced its new flagship model, GPT-4o. The model seamlessly integrates text, audio, and visual inputs and outputs, making machine interactions feel more natural. Unlike its predecessors, GPT-4o processes all inputs and outputs through a single neural network, retaining critical information and context. It outperforms previous models in response time, vision, and audio understanding, and handles complex tasks like harmonizing songs and real-time translation. GPT-4o also excels in non-English languages and sets new benchmarks in reasoning and translation. OpenAI has incorporated robust safety measures and conducted extensive external assessments to mitigate risks. GPT-4o’s text and image capabilities are now rolling out in ChatGPT, with Voice Mode entering alpha testing soon. Developers can access GPT-4o via the API, with expanded audio and video functionality rolling out to trusted partners. OpenAI’s phased release strategy aims to ensure safety and usability, and the company is inviting community feedback.
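
For reference, a minimal sketch of calling GPT-4o through OpenAI's official Python SDK is shown below. It assumes `pip install openai` and an OPENAI_API_KEY environment variable, uses the `gpt-4o` model identifier, and only exercises the text modality; the audio and video capabilities mentioned above are rolling out separately.

```python
# Minimal sketch: a text-only GPT-4o request via the official openai Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain in two sentences what makes GPT-4o different from GPT-4."},
    ],
)
print(response.choices[0].message.content)
```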


AlphaFold 3 Unveiled by Google DeepMind

Google DeepMind and Isomorphic Labs have unveiled AlphaFold 3, the latest iteration of the groundbreaking AI model renowned for its precise predictions of protein, DNA, and other molecular structures. Notable advancements include a 50% improvement in predicting drug-like interactions compared with conventional methods. While its predecessor, AlphaFold 2, focused solely on protein structures, AlphaFold 3 extends its capabilities to model and predict interactions across a broader spectrum of molecules. The model's accessibility has been broadened through the introduction of the AlphaFold Server, which lets non-commercial users leverage its predictive power for research. Isomorphic Labs, affiliated with Google DeepMind, is already collaborating with pharmaceutical partners, using AlphaFold 3 to accelerate drug design. This development is poised to speed up drug discovery and deepen our understanding of biology, building on the transformative impact of previous AlphaFold iterations.

Intel's Aurora Reaches Exascale, Emerges as the Premier AI System in Speed

Intel, in collaboration with Argonne National Laboratory and Hewlett Packard Enterprise (HPE), has announced that its Aurora supercomputer has reached exascale computing, making it the fastest AI-focused system to date. At 1.012 exaflops, Aurora marks a significant milestone in scientific computing, facilitating breakthroughs across various disciplines. Equipped with Intel Data Center GPU Max Series and Xeon CPU Max Series processors, Aurora offers massive parallel processing capabilities, enabling researchers to leverage generative AI models and accelerate scientific discovery. The supercomputer's scale is impressive: 166 racks, 10,624 compute blades, and 63,744 Intel Data Center GPU Max Series units, making it the largest GPU cluster in the world. Intel's commitment to advancing HPC and AI is further demonstrated by the expansion of its Tiber Developer Cloud, which gives developers state-of-the-art hardware and enhanced service capabilities for evaluating and optimizing AI models at scale. This achievement underscores Intel's dedication to driving innovation and transformative discoveries in AI and HPC.

Image credit: Argonne National Laboratory

Google Unveils Imagen 3: Their Most Advanced Text-to-Image Model 

Google has unveiled Imagen 3, its latest advancement in text-to-image modeling, promising superior detail, natural language comprehension, and text rendering. Available in a private preview for select creators via ImageFX, Imagen 3 is set to enhance the image generation experience with photorealistic results, fewer visual artifacts, and improved text incorporation. Douglas Eck, senior research director at Google DeepMind, highlighted Imagen 3's ability to understand nuanced prompts and retain small details from longer inputs during the company's I/O developer conference. This announcement follows the general availability of Imagen 2 on Vertex AI six months ago and the introduction of text-to-live-image capabilities in April. Despite facing criticism earlier this year, Google aims to maintain its competitive edge in the AI space, particularly against rivals with their own image generation tools, such as OpenAI's DALL-E and Meta AI. Alongside Imagen 3, Google I/O also featured the launch of Veo, a new video generation model.



Q&Ai

What impact does attention offloading have on reducing the costs of LLM inference at scale?

A new study by researchers at Tsinghua University suggests that rearranging computations and hardware configurations for serving large language models (LLMs) can significantly reduce inference costs. The study introduces "attention offloading," a technique that leverages lower-priced GPUs for memory-intensive operations while reserving more expensive, compute-optimized accelerators for other tasks. With high-end AI accelerators being expensive and in high demand, attention offloading offers a cost-effective solution for companies serving LLMs at scale. By utilizing a heterogeneous architecture that optimizes different types of accelerators for specific aspects of LLM inference, attention offloading aligns resource demands with hardware strengths, resulting in higher performance and cost efficiency. The researchers developed Lamina, a distributed heterogeneous LLM inference system with attention offloading, which demonstrates significantly higher throughput per cost compared to existing solutions. This approach is expected to help companies reduce inference costs and capital expenditure on accelerators as LLMs become more prevalent.
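
To make the idea concrete, here is a toy sketch of splitting a single decode step across a compute-optimized device and a cheaper, memory-rich device. The device names, shapes, and scheduling below are illustrative assumptions for a PyTorch setup, not the Lamina system itself.

```python
# Toy sketch of attention offloading: dense projections and the FFN run on a
# compute-optimized accelerator, while the KV cache and the memory-bound
# attention step live on a cheaper, memory-rich GPU. Falls back to CPU if
# fewer than two GPUs are available.
import torch

COMPUTE_DEV = "cuda:0" if torch.cuda.is_available() else "cpu"
MEMORY_DEV = "cuda:1" if torch.cuda.device_count() > 1 else COMPUTE_DEV

D_MODEL = 64

# Dense projection weights stay on the compute-optimized device.
w_qkv = torch.randn(D_MODEL, 3 * D_MODEL, device=COMPUTE_DEV)
w_ffn = torch.randn(D_MODEL, D_MODEL, device=COMPUTE_DEV)

# The KV cache stays on the cheaper memory device.
kv_cache = {"k": torch.zeros(0, D_MODEL, device=MEMORY_DEV),
            "v": torch.zeros(0, D_MODEL, device=MEMORY_DEV)}

def decode_step(x: torch.Tensor) -> torch.Tensor:
    """One toy decode step for a single token embedding x of shape (1, D_MODEL)."""
    # 1) Compute-bound projections on the expensive accelerator.
    q, k, v = (x.to(COMPUTE_DEV) @ w_qkv).split(D_MODEL, dim=-1)

    # 2) Ship the new K/V to the memory device and append to the cache.
    kv_cache["k"] = torch.cat([kv_cache["k"], k.to(MEMORY_DEV)])
    kv_cache["v"] = torch.cat([kv_cache["v"], v.to(MEMORY_DEV)])

    # 3) Memory-bound attention over the whole cache on the cheaper GPU.
    scores = (q.to(MEMORY_DEV) @ kv_cache["k"].T) / D_MODEL ** 0.5
    attn_out = torch.softmax(scores, dim=-1) @ kv_cache["v"]

    # 4) Back to the compute device for the compute-bound FFN.
    return attn_out.to(COMPUTE_DEV) @ w_ffn

out = decode_step(torch.randn(1, D_MODEL))
print(out.shape)  # torch.Size([1, 64])
```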

How to improve the reliability of language models?

MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have devised a novel "consensus game" inspired by game theory to enhance the reliability of AI's text comprehension and generation capabilities. In this setup, reminiscent of a cooperative tabletop game, one part of the AI system generates sentences while the other evaluates them, with the two working together to settle on the right answer. By treating this interaction as a game in which both components collaborate under defined rules to reach a consensus, the researchers observed significant improvements in the AI's accuracy and coherence across various tasks, including reading comprehension and dialogue. Traditional language models often struggle to reconcile conflicting scores from generative and discriminative querying methods. The consensus game approach, using an equilibrium-ranking algorithm, effectively bridges this gap and improves model performance. This methodology, showcased at the International Conference on Learning Representations (ICLR), marks a significant step forward in leveraging game-theoretic strategies to enhance language model reliability and consistency.
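
As a rough intuition for how generator and discriminator scores can be reconciled, the toy sketch below nudges two score distributions over candidate answers toward agreement and ranks candidates by the consensus. It is a simplified stand-in with made-up numbers, not the equilibrium-ranking (no-regret learning) algorithm from the ICLR paper.

```python
# Toy consensus sketch: pull a generator distribution and a discriminator
# distribution over the same candidate answers toward each other, then rank
# candidates by the consensus score. Scores are hypothetical.
import numpy as np

candidates = ["Paris", "Lyon", "Marseille"]

gen = np.array([0.60, 0.25, 0.15])    # generator: P(answer | question)
disc = np.array([0.50, 0.10, 0.40])   # discriminator: P(answer judged correct)

def normalize(p):
    return p / p.sum()

# Each policy repeatedly steps toward the other while staying anchored to
# its own initial beliefs (the anchoring plays the role of regularization).
gen_t, disc_t = gen.copy(), disc.copy()
for _ in range(50):
    gen_t = normalize(np.sqrt(gen * disc_t))    # geometric step toward the discriminator
    disc_t = normalize(np.sqrt(disc * gen_t))   # geometric step toward the generator

consensus = normalize(gen_t * disc_t)
ranking = [candidates[i] for i in np.argsort(-consensus)]
print(dict(zip(candidates, consensus.round(3))), ranking)
```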

Tools

  • 🤝 Touch AI finds ideal business customers in seconds
  • 🔒 Facia prevents deepfakes with facial recognition tech
  • Synthflow AI Teams automates customer interactions with AI voice assistants
  • 📹 Krea AI Video generates an AI video from keyframe images
  • 🎨 Stable Artisan brings multimodal media generation to Discord via Stable Diffusion

Follow us on Twitter and LinkedIn for more content on artificial intelligence, global payments, and compliance. Learn more about how Niural uses AI for global payments and team management to care for your company's most valuable resource: your people.

See you next week!
