GenAIPro – Breaking News in Tech, Search, Social, & Business
https://www.webpronews.com/emergingtech/genaipro/

OpenAI Is Simplifying Its Product Roadmap
https://www.webpronews.com/openai-is-simplifying-its-product-roadmap/
Thu, 13 Feb 2025

OpenAI is implementing a much-needed simplification of its roadmap, making it easier to understand what its AI models do.

As OpenAI has expanded and improved its AI models, the company has adopted a naming scheme that can leave even the most experienced tech users scratching their heads. In a post on X, CEO Sam Altman says the company is going to simplify its product roadmap.

Interestingly, Altman says the company will not ship its upcoming o3 model, instead incorporating o3 and other technologies in the upcoming GPT-5. In the interim, the company will release GPT-4.5, which Altman says will be its “last non-chain-of-thought model.”

In a reply to a comment, Altman revealed that users can expect GPT-4.5 in the coming weeks, while GPT-5 will be released in the coming months.

AI Models Are Terrible At Relaying or Summarizing News
https://www.webpronews.com/ai-models-are-terrible-at-relaying-or-summarizing-news/
Wed, 12 Feb 2025

A new study has shown that the leading AI models are terrible at summarizing or relaying news stories, casting doubt on their role in journalistic applications.

The study was conducted by the BBC and looked at OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI. The study involved giving the AI models content from the outlet’s website and then asking questions about the news stories.

Deborah Turness, CEO of BBC News and Current Affairs, detailed the study in a blog post.

Our researchers tested market-leading consumer AI tools – ChatGPT, Perplexity, Microsoft Copilot and Google Gemini – by giving them access to the BBC News website, and asked them to answer one hundred basic questions about the news, prompting them to use BBC News articles as sources.

Unfortunately, the results were less than encouraging.

The results? The team found ‘significant issues’ with just over half of the answers generated by the assistants.

The AI assistants introduced clear factual errors into around a fifth of answers they said had come from BBC material.

And where AI assistants included ‘quotations’ from BBC articles, more than one in ten had either been altered, or didn’t exist in the article.

Part of the problem appears to be that AI assistants do not discern between facts and opinion in news coverage; do not make a distinction between current and archive material; and tend to inject opinions into their answers.

The results they deliver can be a confused cocktail of all of these – a world away from the verified facts and clarity that we know consumers crave and deserve.

Why the Study Is Concerning

The BBC’s study is particularly disturbing in the context of the tech and news industries’ wholesale adoption of AI. Countless outlets have turned to AI “reporters” and “writers” in an effort to speed up production or cut costs.

Unfortunately, much of modern news writing involves writing about stories other outlets have already covered, while trying to put a different and unique spin on the news. As the study demonstrates, however, this is exactly the kind of task that AI is still ill-suited for, failing to distinguish between fact and opinion, hallucinating details, and making up quotes.

Companies looking to replace human workers with AI agents, especially in fields where accuracy matters, would do well to heed the BBC’s findings.

For more information, the BBC has published the study in its entirety on its website.

OpenAI & CSU to Provide ChatGPT Edu to Students & Staff
https://www.webpronews.com/openai-csu-to-provide-chatgpt-edu-to-students-staff/
Wed, 05 Feb 2025

OpenAI and California State University (CSU) are working together to provide ChatGPT Edu to more than 500,000 students, staff, and faculty.

ChatGPT Edu is a specialized version of the AI model that is tuned for educational use. CSU’s adoption represents “the largest implementation of ChatGPT by any single organization or company anywhere in the world.” CSU will roll out the AI model to 23 campuses, giving it the distinction of being the first AI-powered university in the US.

OpenAI says the deployment will empower students in multiple areas.

The broad access to ChatGPT will allow CSU students to integrate AI into their studies, while faculty can use it to streamline administrative tasks, freeing up more time for teaching and research. This initiative will enhance the CSU experience and equip students with essential AI skills to succeed in an increasingly AI-literate workforce and succeed in an increasingly AI-driven U.S. economy through AI initiatives like:

  • Next-Gen Teaching and Learning: Faculty can leverage ChatGPT for curriculum development and interactive course-specific GPTs, while students benefit from personalized tutoring, study guides, and information retrieval.
  • AI Coaching: CSU will introduce a dedicated platform offering free training programs and certifications for all students, faculty, and staff, equipping them with the skills to effectively use AI tools like ChatGPT. This initiative ensures a comprehensive approach to AI skill-building across the university system.
  • AI Workforce Readiness: CSU will connect students with apprenticeship programs in AI-driven industries, ensuring graduates enter the workforce with in-demand AI skills.

OpenAI touts the benefits of its AI models in the context of learning and research.

Just two years after its launch, ChatGPT supports over 300 million weekly active users worldwide. Among its most popular uses is learning. Students and lifelong learners rely on ChatGPT for tutoring, personalized access to information across different formats and languages, and the flexibility to explore any topic, anytime. Early research suggests AI can significantly enhance educational outcomes and career readiness: Harvard researchers found that an AI-powered tutor, customized for a physics course, doubled student engagement and improved problem-solving, particularly for those with less prior knowledge. Meanwhile, a Microsoft study found that individuals with AI skills are over 70% more likely to be hired, highlighting the growing demand for AI proficiency in the workforce.

“AI-powered universities” have also emerged worldwide, with institutions like Arizona State University, ESCP, Harvard University, London Business School, Oxford, and The Wharton School taking steps to make AI as fundamental to their campus as using the internet. CSU’s deployment takes this model a step further, ensuring that students across an entire school system—not just a single campus—have access to this transformative technology.

CSU clearly wants to ensure it is on the leading edge of AI’s transformation of education, giving its students a leg up in a changing world and job market.

“The CSU has long been known for our unwavering commitment to access, equity and innovation. To uphold our mission, we must ensure that our diverse students from across California and staff are ‘AI-empowered’ to thrive as our world evolves,” said Dr. Mildred García, Chancellor of the California State University.

Meta Will Not Develop ‘Critical Risk’ AI Models
https://www.webpronews.com/meta-will-not-develop-critical-risk-ai-models/
Tue, 04 Feb 2025

Meta has defined the AI policy that will govern its future development, saying it will stop developing AI models it deems a “critical risk.”

Meta has been quietly emerging as one of the leading AI companies, but it is taking a far different approach than many of its competitors. While most companies are releasing proprietary AI models, Meta has open-sourced its Llama family of models.

One of the biggest challenges facing Meta, as well as the rest of the industry, is how to develop AI models that are safe and cannot be used in a harmful manner. Meta is a signatory of the Frontier AI Safety Commitments, and its new Frontier AI Framework is aligned with that agreement.

In the new policy, Meta outlines what its goals are and what catastrophic outcomes it must work to prevent.

We start by identifying a set of catastrophic outcomes we must strive to prevent, and then map the potential causal pathways that could produce them. When developing these outcomes, we’ve considered the ways in which various actors, including state level actors, might use/misuse frontier AI. We describe threat scenarios that would be potentially sufficient to realize the catastrophic outcome, and we define our risk thresholds based on the extent to which a frontier AI would uniquely enable execution of any of our threat scenarios.

By anchoring thresholds on outcomes, we aim to create a precise and somewhat durable set of thresholds, because while capabilities will evolve as the technology develops, the outcomes we want to prevent tend to be more enduring. This is not to say that our outcomes are fixed. It is possible that as our understanding of frontier AI improves, outcomes or threat scenarios might be removed, if we can determine that they no longer meet our criteria for inclusion. We also may need to add new outcomes in the future. Those outcomes might be in entirely novel risk domains, potentially as a result of novel model capabilities, or they might reflect changes to the threat landscape in existing risk domains that bring new kinds of threat actors into scope. This accounts for the ways in which frontier AI might introduce novel harms, as well its potential to increase the risk of catastrophe in known risk domains.
An outcomes-led approach also enables prioritization. This systematic approach will allow us to identify the most urgent catastrophic outcomes – i.e., cybersecurity and chemical and biological weapons risks – and focus our efforts on avoiding them rather than spreading efforts across a wide range of theoretical risks from particular capabilities that may not plausibly be presented by the technology we are actually building.

Meta breaks down exactly how it defines a critical risk AI and what action it will take in response.

Meta Frontier AI Framework Decision Making

We define our risk thresholds based on the extent to which a frontier AI would uniquely enable execution of any of our threat scenarios. A frontier AI is assigned to the critical risk threshold if we assess that it would uniquely enable execution of a threat scenario. If a frontier AI is assessed to have reached the critical risk threshold and cannot be mitigated, we will stop development and implement the measures outlined in Table 1. Our high and moderate risk thresholds are defined in terms of the level of uplift a frontier AI provides towards realising a threat scenario. We will develop these models in line with the processes outlined in this Framework, and implement the measures outlined in Table 1.

Meta Frontier AI Framework Table

Meta also says it will evaluate—and respond accordingly—to the possibility that high and moderate risk AI models could advance to the critical risk threshold.

We define our thresholds based on the extent to which frontier AI would uniquely enable the execution of any of the threat scenarios we have identified as being potentially sufficient to produce a catastrophic outcome. If a frontier AI is assessed to have reached the critical risk threshold and cannot be mitigated, we will stop development and implement the measures outlined in Table 1. Our high and moderate risk thresholds are defined in terms of the level of uplift a model provides towards realising a threat scenario. We will develop Frontier AI in line with the processes outlined in this Framework, and implement the measures outlined in Table 1. Section 3 on Outcomes & Thresholds provides more information about how we define our thresholds.

The AI Industry Needs More Open Safety Policies

Meta’s detailing and documenting its standards is a refreshing stance in an industry that appears to be recklessly rushing toward artificial general intelligence (AGI). Employees across the industry have warned there is not enough being done to ensure safe development.

By clearly defining what its safety goals are, and committing to halting development of critical risk models, Meta is setting itself apart as one of the few AI companies that is putting safety first and foremost, with Anthropic being another notable example.

Hopefully, other companies will take note and follow suit.

Verizon Offers Google One AI Premium Half-Price
https://www.webpronews.com/verizon-offers-google-one-ai-premium-half-price/
Tue, 04 Feb 2025

Verizon has unveiled its latest perk for wireless customers, bundling Google One AI Premium for $10/mo, half the standard price.

Verizon’s deal with Google is an industry first, with the wireless carrier being the first to offer an advanced AI perk, let alone at such an affordable price.

Verizon isn’t just keeping up with the future — we’re building it. Being the first U.S. wireless provider to offer an AI-powered perk shows how serious we are about leading the way in innovation. And the value? Unmatched.

For just $10 a month, you’re unlocking your pass to Google’s next-gen AI with Gemini Advanced, Gemini in Google apps like Gmail and Docs, plus priority access to Google’s newest AI solutions — from new features to experimental models. These tools can completely transform how you work, learn and create. Whether you’re a busy professional looking for ways to save time, a student tackling big projects, or someone who just likes to experiment, this perk gives you tools that take your productivity and creativity to the next level.

“As the first U.S. wireless provider to offer an AI-powered perk at an incredible value, we’re putting the future of AI directly into our customers’ hands, making everyday tasks easier via Google One AI Premium,” said Sowmyanarayan Sampath, CEO Verizon Consumer. “We’ll continue to bring our mobile and internet customers new deals and even more ways to personalize their plans based on how they live, work and play.”

Interestingly, the Verizon plan appears to come with all the features of the standard Google One AI Premium plan, including the 2 TB of cloud storage.

Get more done, faster with your personal tutor, analyst or coach. With Gemini Advanced, it’s like having a super-smart assistant by your side 24/7 to help you tackle tasks and spend more time on what’s most important. You can use Gemini in the Google apps you already know and love like Gmail, Docs, Meet, Slides and Sheets to write a draft, take meeting notes, create stunning presentations, visualize data and more.

Streamline your daily tasks. Create and use custom AI experts (“Gems”), for any topic, turning Gemini into your personal brainstorming partner, study helper or planning assistant.

Save hours on research. Analyze whole books and stacks of articles (up to 1,500 pages) or use Deep Research to browse hundreds of sites and create comprehensive reports in minutes to bring you up to speed on a topic.

Get more space for what’s important. With 2 TB of cloud storage, you’ll have plenty of space to keep your files, photos, and videos safely backed up to the cloud. Forget about running out of room or losing track of important stuff.

The announcement is good news for Google as the company continues to improve Gemini and compete with OpenAI and Anthropic.

OpenAI Releases o3-mini, a STEM-Focused Reasoning Model
https://www.webpronews.com/openai-releases-o3-mini-a-stem-focused-reasoning-model/
Sun, 02 Feb 2025

OpenAI has released o3-mini, a new AI model in its reasoning series that focuses on STEM capabilities, especially coding, math, and science.

The AI firm announced the new AI model in a blog post.

OpenAI o3-mini is our first small reasoning model that supports highly requested developer features including function calling, Structured Outputs, and developer messages, making it production-ready out of the gate. Like OpenAI o1-mini and OpenAI o1-preview, o3-mini will support streaming. Also, developers can choose between three reasoning effort options—low, medium, and high—to optimize for their specific use cases. This flexibility allows o3-mini to “think harder” when tackling complex challenges or prioritize speed when latency is a concern. o3-mini does not support vision capabilities, so developers should continue using OpenAI o1 for visual reasoning tasks. o3-mini is rolling out in the Chat Completions API, Assistants API, and Batch API starting today to select developers in API usage tiers 3-5.
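Those effort levels surface as a single request parameter. As a rough sketch of what selecting one looks like (the `reasoning_effort` field is the documented low/medium/high switch; the helper function and prompt here are illustrative, and actually sending the request over HTTP is omitted):

```python
# Illustrative helper: build a Chat Completions request body for o3-mini.
# "reasoning_effort" is the documented low/medium/high option; transport
# details (HTTP client, API key) are intentionally left out of this sketch.

VALID_EFFORTS = {"low", "medium", "high"}

def build_o3_mini_request(prompt: str, effort: str = "medium") -> dict:
    if effort not in VALID_EFFORTS:
        raise ValueError(f"effort must be one of {sorted(VALID_EFFORTS)}")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,  # trade response latency for deeper reasoning
        "messages": [{"role": "user", "content": prompt}],
    }

# "high" spends more time reasoning; "low" favors latency.
req = build_o3_mini_request("Find the bug in this function...", effort="high")
```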

OpenAI says its o1 model remains its flagship reasoning model, but o3-mini provides a specialized experience for those who need it.

In ChatGPT, o3-mini uses medium reasoning effort to provide a balanced trade-off between speed and accuracy. All paid users will also have the option of selecting o3-mini-high in the model picker for a higher-intelligence version that takes a little longer to generate responses. Pro users will have unlimited access to both o3-mini and o3-mini-high.

Interestingly, the o3-mini model outperforms o1 in some situations, especially within the STEM arena.

OpenAI o3-mini Live Coding – Credit OpenAI

Similar to its OpenAI o1 predecessor, OpenAI o3-mini has been optimized for STEM reasoning. o3-mini with medium reasoning effort matches o1’s performance in math, coding, and science, while delivering faster responses. Evaluations by expert testers showed that o3-mini produces more accurate and clearer answers, with stronger reasoning abilities, than OpenAI o1-mini. Testers preferred o3-mini’s responses to o1-mini 56% of the time and observed a 39% reduction in major errors on difficult real-world questions. With medium reasoning effort, o3-mini matches the performance of o1 on some of the most challenging reasoning and intelligence evaluations including AIME and GPQA.

OpenAI o3-mini Math – Credit OpenAI
OpenAI o3-mini Science – Credit OpenAI
OpenAI o3-mini Coding – Credit OpenAI
OpenAI o3-mini Software Development – Credit OpenAI

OpenAI also touts the speed and efficiency of the o3-mini model.

With intelligence comparable to OpenAI o1, OpenAI o3-mini delivers faster performance and improved efficiency. Beyond the STEM evaluations highlighted above, o3-mini demonstrates superior results in additional math and factuality evaluations with medium reasoning effort. In A/B testing, o3-mini delivered responses 24% faster than o1-mini, with an average response time of 7.7 seconds compared to 10.16 seconds.

The o3-mini model continues OpenAI’s efforts to offer a variety of AI models, tuned to specific tasks and uses.

Microsoft Taps DeepSeek R1 for Azure AI Foundry and GitHub
https://www.webpronews.com/microsoft-taps-deepseek-r1-for-azure-ai-foundry-and-github/
Thu, 30 Jan 2025

Microsoft is joining the list of companies adopting DeepSeek R1, adding it as an available AI model in the company’s Azure AI Foundry and GitHub.

DeepSeek has been gaining fans and critics at a phenomenal rate after the company demonstrated that its AI model could rival or best OpenAI’s at a fraction of the cost. Former Intel CEO Pat Gelsinger said he is switching his startup from using OpenAI’s models to DeepSeek’s, and Perplexity AI has added R1 to the list of AI models it uses.

Microsoft announced in a blog post that it is now making R1 available to its customers.

DeepSeek R1 is now available in the model catalog on Azure AI Foundry and GitHub, joining a diverse portfolio of over 1,800 models, including frontier, open-source, industry-specific, and task-based AI models. As part of Azure AI Foundry, DeepSeek R1 is accessible on a trusted, scalable, and enterprise-ready platform, enabling businesses to seamlessly integrate advanced AI while meeting SLAs, security, and responsible AI commitments—all backed by Microsoft’s reliability and innovation.

Microsoft touted DeepSeek, as well as other models, for their ability to help accelerate developers’ workflow.

One of the key advantages of using DeepSeek R1 or any other model on Azure AI Foundry is the speed at which developers can experiment, iterate, and integrate AI into their workflows. With built-in model evaluation tools, they can quickly compare outputs, benchmark performance, and scale AI-powered applications. This rapid accessibility—once unimaginable just months ago—is central to our vision for Azure AI Foundry: bringing the best AI models together in one place to accelerate innovation and unlock new possibilities for enterprises worldwide.

The company also says it is committed to delivering DeepSeek R1 in a safe and secure manner.

We are committed to enabling customers to build production-ready AI applications quickly while maintaining the highest levels of safety and security. DeepSeek R1 has undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks. With Azure AI Content Safety, built-in content filtering is available by default, with opt-out options for flexibility. Additionally, the Safety Evaluation System allows customers to efficiently test their applications before deployment. These safeguards help Azure AI Foundry provide a secure, compliant, and responsible environment for enterprises to confidently deploy AI solutions.

Microsoft Continues to Lessen Its Reliance On OpenAI

Microsoft and OpenAI’s relationship has been cooling for months, with Microsoft reportedly wanting to lessen its reliance on OpenAI and look for alternatives. The company has growing concerns over GPT’s “cost and speed for enterprise.”

“We incorporate various models from OpenAI and Microsoft depending on the product and experience,” Microsoft said in the statement in late 2024.

Incorporating DeepSeek’s AI model in Azure AI appears to be another step in that direction.

Pat Gelsinger Is Switching His Startup From OpenAI to DeepSeek
https://www.webpronews.com/pat-gelsinger-is-switching-his-startup-from-openai-to-deepseek/
Wed, 29 Jan 2025

DeepSeek just got a major endorsement, with former Intel CEO Pat Gelsinger saying he is switching his startup from using OpenAI to DeepSeek.

DeepSeek took the tech and AI industry by surprise, unveiling an AI model that matches the best from OpenAI, but at a fraction of the cost—as little as $3-$5 million. Even more impressive, the Chinese startup used Nvidia H800 chips, which are specifically designed to comply with US export laws. As a result, the H800 offers less performance than Nvidia’s flagship chips.

The Chinese startup’s success has raised numerous questions regarding the AI industry, not the least of which is whether American companies are overvalued and if the AI bubble is about to burst. There are also questions about the effectiveness of US sanctions on Chinese firms, given that DeepSeek achieved its success using second-rate chips.

Beyond technological and political questions, DeepSeek is already gaining a myriad of fans and users, including Gelsinger. The longtime tech exec took to X to emphasize the transformational impact of DeepSeek’s achievement.

Wisdom is learning the lessons we thought we already knew. DeepSeek reminds us of three important learnings from computing history:

  1. Computing obeys the gas law. Making it dramatically cheaper will expand the market for it. The markets are getting it wrong, this will make AI much more broadly deployed.
  2. Engineering is about constraints. The Chinese engineers had limited resources, and they had to find creative solutions.
  3. Open Wins. DeepSeek will help reset the increasingly closed world of foundational AI model work. Thank you DeepSeek team.

Even more telling, Gelsinger told TechCrunch that his startup, Gloo, was adopting DeepSeek’s R1 model instead of paying for OpenAI’s o1.

“My Gloo engineers are running R1 today,” he said. “They could’ve run o1 — well, they can only access o1, through the APIs.”

Gelsinger went on to say that he believes DeepSeek will help usher in more affordable AI that can be deployed and integrated in far more devices.

“I want better AI in my Oura Ring. I want better AI in my hearing aid. I want more AI in my phone. I want better AI in my embedded devices, like the voice recognition in my EV,” he says.

If Gelsinger’s reaction is any indication, DeepSeek’s impact on the AI and tech industry could be far greater than critics believe—and that’s saying something.

DeepSeek Deep Dive: What Is It and Why Is It Freaking Out the Tech World?
https://www.webpronews.com/deepseek-deep-dive-what-is-it-and-why-is-it-freaking-out-the-tech-world/
Mon, 27 Jan 2025

The landscape of artificial intelligence (AI) has undergone a significant upheaval with the introduction of DeepSeek R1, a model that’s not just a new entrant but a potential game-changer. In a comprehensive discussion on the Big Technology podcast, M.G. Siegler, a seasoned tech commentator and investor, peeled back the layers of this development, offering a nuanced examination of its implications.

DeepSeek R1, developed by a Chinese AI lab, has caught the industry off-guard with its prowess, matching the performance of giants like OpenAI’s o1 at a mere 3-5% of the cost. This efficiency is not just a market disruptor but a technical marvel that challenges the very foundations of how AI models have been developed and deployed.

Disrupting The AI Industry: Cost & Performance

DeepSeek R1’s benchmark performances are nothing short of impressive. “On the AIME mathematics test, it scored 79.8% compared to OpenAI’s 79.2%,” Siegler highlighted, underscoring its capability. The model also achieved a 97.3% accuracy on the MATH-500 benchmark, surpassing OpenAI’s 96.4%. These achievements come with a dramatic reduction in operational costs, with DeepSeek R1 running at “55 cents per million token inputs and $2.19 per million token outputs,” in stark contrast to OpenAI’s higher rates. This cost-performance ratio is a wake-up call for the industry, suggesting a shift towards more economically viable AI solutions.

Rating The AI Earthquake: Market Impact

The market has responded with what can only be described as shock. Siegler pointed out, “In pre-market trading, Nvidia was down 10 to 11%,” with other tech behemoths like Microsoft and Google also witnessing significant drops. This market reaction signals a potential reevaluation of investment in AI infrastructure, particularly in hardware like Nvidia’s GPUs, which have been at the heart of AI’s scaling narrative.

Technical Innovation: How DeepSeek Works

From a technical standpoint, DeepSeek R1’s architecture is a testament to innovation under constraint. “It’s based on a mixture-of-experts architecture,” Siegler explained, allowing the model to activate only necessary parameters for each query, thus optimizing for both speed and efficiency. This approach contrasts with the monolithic models that activate all parameters regardless of the task at hand, leading to higher computational and energy costs.
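The sparse-activation idea can be sketched in a few lines: a small gating network scores all experts for a given input, but only the top-k highest-scoring experts actually run. (This is a generic mixture-of-experts illustration with made-up shapes and expert counts, not DeepSeek’s actual routing code.)

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_EXPERTS, TOP_K = 8, 4, 2   # hidden size, expert count, experts run per token

W_gate = rng.normal(size=(D, N_EXPERTS))                       # gating network
experts = [rng.normal(size=(D, D)) for _ in range(N_EXPERTS)]  # one weight matrix per expert

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ W_gate                      # score every expert...
    top = np.argsort(scores)[-TOP_K:]        # ...but keep only the top-k
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over the winners
    # Only TOP_K of the N_EXPERTS networks execute; the rest stay idle,
    # which is where the speed and energy savings come from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.normal(size=D))
```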

The model’s development involved a process of distillation from larger models to create compact yet potent versions. “They took, for example, a Llama model with 70 billion parameters and distilled it down,” said Siegler, outlining how DeepSeek managed to maintain high performance with fewer resources.
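Distillation itself can be pictured as training the small model to match the large model’s output distribution rather than hard labels, typically by minimizing a KL divergence between the two. (A generic sketch with illustrative logits and temperature, not DeepSeek’s training code.)

```python
import numpy as np

def softmax(z: np.ndarray, T: float = 1.0) -> np.ndarray:
    z = z / T                        # temperature softens the distribution
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T: float = 2.0) -> float:
    """KL divergence pushing the student's distribution toward the teacher's."""
    p_teacher = softmax(np.asarray(teacher_logits), T)
    p_student = softmax(np.asarray(student_logits), T)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))

teacher = np.array([4.0, 1.0, 0.5])   # confident large model
student = np.array([2.0, 1.5, 1.0])   # smaller model being trained
loss = distillation_loss(student, teacher)  # shrinks as the distributions align
```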

The Technology: Pure Reinforcement Learning

DeepSeek R1 diverges from the prevalent self-supervised learning methods by employing pure reinforcement learning (RL). “The models tend to figure out what’s the right answer on their own,” noted Siegler, indicating that this self-guided learning approach not only reduces the need for vast labeled datasets but also fosters unique reasoning capabilities within the model. This RL focus has allowed DeepSeek to fine-tune models through trial and error, improving their reasoning without the need for extensive human annotation, which is both cost and time-intensive.
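The trial-and-error loop described above can be caricatured in a few lines: sample a candidate strategy, score it with an automatic reward check, and shift probability mass toward whatever scored well, with no human-labeled rationales involved. (A toy illustration of the general RL idea, not DeepSeek’s actual algorithm.)

```python
import numpy as np

rng = np.random.default_rng(1)
probs = np.ones(3) / 3          # policy over three candidate "strategies"
CORRECT = 2                     # strategy 2 happens to yield the right answer

def reward(action: int) -> float:
    # Automatic verifier (e.g. "does the final answer check out?"),
    # standing in for human annotation.
    return 1.0 if action == CORRECT else 0.0

for _ in range(500):                      # trial and error
    a = rng.choice(3, p=probs)
    scores = np.log(probs)
    scores[a] += 0.1 * reward(a)          # reinforce actions that scored well
    probs = np.exp(scores) / np.exp(scores).sum()

# After training, the policy concentrates on the rewarded strategy.
```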

Challenging The Scaling Hypothesis

The scaling hypothesis, which posits that performance increases with more compute, data, and time, is now under scrutiny. “DeepSeek has shown you can actually do all this without that,” Siegler remarked, suggesting that the era of simply scaling up might be nearing an end. This could potentially reduce the dependency on massive hardware investments, redirecting focus towards smarter, more efficient AI development strategies.

Market Reactions & Stock Impact

The immediate market fallout has been significant, with Nvidia’s stock plummeting. “It’s going to be pretty hard for this day at least,” Siegler observed, reflecting on the market’s knee-jerk reaction. However, some see this as a long-term opportunity for companies like Nvidia, where increased efficiency might spur demand for more specialized, less resource-heavy AI hardware.

Business Model Implications

The business implications are profound. Companies like Microsoft and Google, which have been integrating AI into their ecosystems, now face a dilemma. “If the underlying economics just totally changed overnight, what does that do to their models?” Siegler questioned. This might push these companies towards reimagining their AI offerings, possibly leading to price adjustments or new service models to align with the new cost structures.

Two Views on AI Spending

There’s a dichotomy in how this development is perceived. On one hand, there’s optimism that efficiency will lead to broader adoption and innovation. On the other, there’s caution about the implications for companies that have invested heavily in scaling. “Do we continue to spend billions for marginal gains, or do we leverage this efficiency to push towards practical AI applications?” Siegler pondered.

Silicon Valley’s Response

In response, tech leaders are attempting to calm the markets with narratives around increased efficiency leading to higher usage, with Microsoft CEO Satya Nadella citing Jevons Paradox. “It feels like there’s a group text going on,” Siegler said, hinting at a coordinated message to reassure investors.

The Need for Real AI Applications

The ultimate test for DeepSeek R1 and similar models will be their application in real-world scenarios. “We need to see AI applications like we need to see an economy that takes use of this technology,” Siegler stressed. Despite the technological leaps, the real value of AI will only be realized when it translates into tangible economic activities, beyond proof of concepts.

Impact on AI Startups

For startups, DeepSeek’s model could be liberating. “If you can get models that are as performant with less spend, you’re going to see a lot more experimentation,” Siegler noted. This could democratize AI development, fostering innovation among smaller players who were previously deterred by high entry costs.

A New Paradigm in AI

As we move forward, the tech world must navigate this new terrain where efficiency trumps scale. “It won’t be so easy for all these companies to pull back spend because they’ve already committed,” Siegler warned, suggesting a complex transition where investment strategies will need recalibration. The next few months will be critical in determining whether DeepSeek R1 is a blip or a harbinger of a new AI era.

DeepSeek R1 has not just challenged the status quo but has potentially ushered in a new paradigm in AI development. As the industry adapts, the focus might shift from scaling up to scaling smart, where efficiency, accessibility, and practical application become the new benchmarks of success. For a deeper dive into tech trends, Siegler’s insights at Spyglass.org continue to illuminate the path forward in this ever-evolving landscape.

]]>
611246
OpenAI Admits It Needs ‘to Raise More Capital Than We’d Imagined’ https://www.webpronews.com/openai-admits-it-needs-to-raise-more-capital-than-wed-imagined/ Sun, 05 Jan 2025 17:00:00 +0000 https://www.webpronews.com/?p=610828 Anyone who thought OpenAI’s fundraising was nearing its end is in for a surprise, with the company admitting it needs “to raise more capital than we’d imagined” to succeed.

OpenAI has already raised billions of dollars in funding on the promise of developing true artificial general intelligence (AGI), the term used for AI that can rival human intelligence and learning capabilities. At the same time, the company is in the process of transitioning to a for-profit company, a move that has sparked legal challenges. What’s more, its latest round of funding was provided on the condition that OpenAI successfully completes the transition within two years.

In that context, it’s not surprising the company has authored a blog post defending its transition and bracing the public for just how costly AI development will continue to be. The blog first highlights how the company was initially structured.

In 2019, we became more than a lab – we also became a startup. We estimated that we’d have to raise on the order of $10B to build AGI. This level of capital for compute and talent meant we needed to partner with investors in order to continue the non-profit’s mission.

We created a bespoke structure: a for-profit, controlled by the non-profit, with a capped profit share for investors and employees. We intended to make significant profits to pay back shareholders, who make our mission possible, and have the remainder flow to the non-profit. We rephrased our mission to “ensure that artificial general intelligence benefits all of humanity” and planned to achieve it “primarily by attempting to build safe AGI and share the benefits with the world.” The words and approach changed to serve the same goal—benefiting humanity.

Moving forward, OpenAI says it will have to become more, establishing a structure that allows it to thrive and be sustainable.

As we enter 2025, we will have to become more than a lab and a startup — we have to become an enduring company. The Board’s objectives as it considers, in consultation with outside legal and financial advisors, how to best structure OpenAI to advance the mission of ensuring AGI benefits all of humanity have been:

  1. Choose a non-profit / for-profit structure that is best for the long-term success of the mission. Our plan is to transform our existing for-profit into a Delaware Public Benefit Corporation (PBC) with ordinary shares of stock and the OpenAI mission as its public benefit interest. The PBC is a structure used by many others that requires the company to balance shareholder interests, stakeholder interests, and a public benefit interest in its decisionmaking. It will enable us to raise the necessary capital with conventional terms like others in this space.
  2. Make the non-profit sustainable. Our plan would result in one of the best resourced non-profits in history. The non-profit’s significant interest in the existing for-profit would take the form of shares in the PBC at a fair valuation determined by independent financial advisors. This will multiply the resources that our donors gave manyfold.
  3. Equip each arm to do its part. Our current structure does not allow the Board to directly consider the interests of those who would finance the mission and does not enable the non-profit to easily do more than control the for-profit. The PBC will run and control OpenAI’s operations and business, while the non-profit will hire a leadership team and staff to pursue charitable initiatives in sectors such as health care, education, and science.

In the blog post, OpenAI reveals the astonishing cost of AI development.

“The hundreds of billions of dollars that major companies are now investing into AI development show what it will really take for OpenAI to continue pursuing the mission. We once again need to raise more capital than we’d imagined. Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness.”

OpenAI’s Dilemma

As the company outlines in its blog post, developing AGI is a costly endeavor whose true cost no one yet knows. In addition, the clock is ticking for OpenAI to complete its transition to a for-profit. If the company fails to do so by the two-year deadline, it will have to return its latest round of funding. Meanwhile, Elon Musk has filed a lawsuit challenging OpenAI’s transition, as well as a request for a temporary injunction to prevent the company from moving forward until the court can settle the matter.

At the same time, OpenAI has had a mass exodus of some of its best and brightest engineers, researchers, and executives over concerns the company has lost its way and is no longer focused on its original mission to develop AI safely. In fact, some departing executives have accused the company of prioritizing profits over safety.

Ultimately, 2025 could be a make-or-break year for OpenAI, and the company’s latest blog post could well be a recognition of that fact.

]]>
610828
Elon Musk’s xAI Raises $6 Billion in Funding https://www.webpronews.com/elon-musks-xai-raises-6-billion-in-funding/ Sat, 28 Dec 2024 20:20:35 +0000 https://www.webpronews.com/?p=610777 Elon Musk’s xAI has raised $6 billion in funding, the next step for the tech executive’s disruptive AI startup.

Musk has long been a proponent of safe AI research, leading him to co-found OpenAI. Since cutting ties with OpenAI, Musk founded xAI. The company has been growing rapidly, including plans to grow its Memphis supercomputer to 1 million GPUs.

In its latest round of funding, the startup has raised $6 billion from a host of investment companies.

xAI’s progress is accelerating rapidly.

We have closed our Series C funding round of $6 billion with participation from key investors including A16Z, Blackrock, Fidelity Management & Research Company, Kingdom Holdings, Lightspeed, MGX, Morgan Stanley, OIA, QIA, Sequoia Capital, Valor Equity Partners and Vy Capital, amongst others. Strategic investors NVIDIA and AMD also participated and continue to support xAI in rapidly scaling our infrastructure.

The company took the opportunity to tout the progress it has been making on its AI models.

xAI’s most powerful model yet, Grok 3, is currently training and we are now focused on launching innovative new consumer and enterprise products that will leverage the power of Grok, Colossus, and X to transform the way we live, work, and play.

The funds from this financing round will be used to further accelerate our advanced infrastructure, ship groundbreaking products that will be used by billions of people, and accelerate the research and development of future technologies enabling the company’s mission to understand the true nature of the universe.

xAI is primarily focused on the development of advanced AI systems that are truthful, competent, and maximally beneficial for all of humanity.

Musk has positioned Grok as a less politically correct AI model, one willing to tackle topics ChatGPT, Gemini, and others won’t touch.

The round of investment is a significant vote of confidence in xAI, and should go a long way toward helping further xAI’s ability to challenge the industry leaders.

]]>
610777
‘Godfather of AI’ Revises His Odds of AI Destroying Humanity https://www.webpronews.com/godfather-of-ai-revises-his-odds-of-ai-destroying-humanity/ Fri, 27 Dec 2024 18:51:56 +0000 https://www.webpronews.com/?p=610760 Professor Geoffrey Hinton, considered the “godfather of AI,” has revised his odds for the risk AI poses to humanity—and it’s not good news for humans.

According to The Guardian, Hinton made his comments on BBC Radio 4’s Today program. Hinton has previously said that he believed there was a 10% chance of AI wiping out humanity in the next 30 years. The Today host asked if his estimate had changed.

“Not really, 10 to 20 [per cent],” Hinton replied.

The host pointed out that this was different from his previous estimate, with Hinton now citing as high as a 20% chance of AI destroying humanity.

“If anything,” Hinton acknowledged. “You see, we’ve never had to deal with things more intelligent than ourselves before.

“And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”

“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he added.

Hinton’s Vocal Criticism of AI Development

Hinton has been a vocal critic of AI development, resigning from his position at Google to sound the alarm regarding AI.

“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said at the time. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.

“I don’t think they should scale this up more until they have understood whether they can control it,” he added.

The OpenAI Affair

Hinton was also proud of the fact that his former student, Ilya Sutskever, was one of the individuals who led the boardroom coup against OpenAI CEO Sam Altman, ousting him over concerns about safe AI development.

“I’d also like to acknowledge my students,” Hinton said in the video in October 2024. “I was particularly fortunate to have many very clever students, much cleverer than me, who actually made things work. They’ve gone on to do great things.

“I’m particularly proud of the fact that one of my students fired Sam Altman, and I think I better leave it there and leave it for questions.”

Hinton then went on to discuss the reasons behind Sutskever’s actions, specifically in the context of AI safety.

“So OpenAI was set up with a big emphasis on safety,” he said. “Its primary objective was to develop artificial general intelligence and ensure that it was safe.

“One of my former students, Ilya Sutskever, was the chief scientist. And over time, it turned out that Sam Altman was much less concerned with safety than with profits. And I think that’s unfortunate.”

Given his history and credentials, when Hinton revises his odds on the risk AI poses, tech leaders and lawmakers would do well to take notice.

]]>
610760
OpenAI Releases Sora, Its AI Video Model https://www.webpronews.com/openai-releases-sora-its-ai-video-model/ Tue, 10 Dec 2024 19:18:34 +0000 https://www.webpronews.com/?p=610554 OpenAI is continuing to deliver new products, announcing the general release of Sora, its AI video generation model.

ChatGPT can already generate impressive images, but OpenAI has been working to revolutionize video creation with Sora, which has been in testing for some time. The company announced its release in a blog post.

Earlier this year, we introduced Sora⁠, our model that can create realistic videos from text, and shared our initial research progress⁠ on world simulation. Sora serves as a foundation for AI that understands and simulates reality—an important step towards developing models that can interact with the physical world.

We developed a new version of Sora—Sora Turbo—that is significantly faster than the model we previewed in February. We’re releasing it today as a standalone product at Sora.com to ChatGPT Plus and Pro users.

The model is still somewhat limited, with videos capped at 20 seconds.

Users can generate videos up to 1080p resolution, up to 20 sec long, and in widescreen, vertical or square aspect ratios. You can bring your own assets to extend, remix, and blend, or generate entirely new content from text.

We’ve developed new interfaces to make it easier to prompt Sora with text, images and videos. Our storyboard tool lets users precisely specify inputs for each frame.

We also have Featured and Recent feeds that are constantly updated with creations from the community.
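The launch constraints quoted above (up to 1080p, up to 20 seconds, and three aspect-ratio families) can be expressed as a small request validator. The sketch below is purely illustrative — the function and parameter names are hypothetical and do not reflect OpenAI's actual API:

```python
# Hypothetical validator for the publicly stated Sora launch limits.
ALLOWED_RESOLUTIONS = {480, 720, 1080}   # vertical resolution in pixels, capped at 1080p
ALLOWED_ASPECTS = {"widescreen", "vertical", "square"}
MAX_DURATION_SEC = 20

def validate_request(resolution, duration_sec, aspect):
    """Return a list of problems with a (hypothetical) video generation request;
    an empty list means the request fits the stated limits."""
    problems = []
    if resolution not in ALLOWED_RESOLUTIONS:
        problems.append(f"unsupported resolution: {resolution}p")
    if not 0 < duration_sec <= MAX_DURATION_SEC:
        problems.append(f"duration must be between 1 and {MAX_DURATION_SEC} seconds")
    if aspect not in ALLOWED_ASPECTS:
        problems.append(f"aspect must be one of {sorted(ALLOWED_ASPECTS)}")
    return problems
```

A request for a 30-second 4K "cinema" clip, for instance, would fail all three checks under these limits.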

OpenAI says it is working to ensure safe deployment of Sora, giving society time to adapt, while blocking dangerous and damaging types of videos.

We’re introducing our video generation technology now to give society time to explore its possibilities and co-develop norms and safeguards that ensure it’s used responsibly as the field advances.

All Sora-generated videos come with C2PA metadata, which will identify a video as coming from Sora to provide transparency, and can be used to verify origin. While imperfect, we’ve added safeguards like visible watermarks by default, and built an internal search tool that uses technical attributes of generations to help verify if content came from Sora.

Today, we’re blocking particularly damaging forms of abuse, such as child sexual abuse materials and sexual deepfakes. Uploads of people will be limited at launch, but we intend to roll the feature out to more users as we refine our deepfake mitigations. You can read more about our approach to safety and monitoring in the system card⁠ as well as details on our red teaming efforts.

AI-powered video generation software is already changing the artistic landscape, and Sora promises to further that transformation.

Users looking to give AI video generation a spin can try Sora out here.

]]>
610554
Grok Goes Free: xAI’s AI Chatbot Now Accessible to All Users https://www.webpronews.com/grok-goes-free-xais-ai-chatbot-now-accessible-to-all-users/ Sat, 07 Dec 2024 01:53:11 +0000 https://www.webpronews.com/?p=610499 In a move that could shake up the AI landscape, xAI has announced that its AI chatbot, Grok, is now available for free to all users on the X platform.

xAI’s decision marks a significant shift from the previous model where access was limited to those with a Premium subscription. The democratization of Grok could not only expand its user base but also challenge the dominance of other AI chatbots like ChatGPT, Google’s Gemini, and Anthropic’s Claude.

Grok, developed by Elon Musk’s xAI, was initially launched with much fanfare as a “humorous AI assistant” with a rebellious streak, drawing inspiration from Douglas Adams’ “Hitchhiker’s Guide to the Galaxy.” Until now, its availability was gated behind a paywall, restricting its use to a niche audience of tech enthusiasts and those willing to pay for the X Premium service. However, the latest update makes Grok available to all, albeit with some caveats.

The Caveats

Free users can now engage with Grok by sending up to 10 messages every two hours. This limitation is presumably in place to manage server load and ensure quality of service, but it’s a small price to pay for broader access to what has been described as one of the more engaging and less censored AI models available. For those who’ve been curious about Grok’s capabilities, from answering complex queries to generating images with fewer restrictions compared to other platforms, this is a golden opportunity.

The integration of Grok with the X platform means it has access to real-time information from X posts, providing users with responses that are not only current but also contextually relevant. This feature alone could make Grok a go-to resource for users looking for up-to-the-minute information or insights into what’s trending on the platform. Moreover, Grok’s ability to tackle “spicy” questions that other AI systems might shy away from adds a layer of utility for users interested in less conventional queries.

However, it’s worth noting that this free version isn’t without its boundaries. There are restrictions on the number of questions, image analyses, and perhaps most critically, the depth of interaction compared to what Premium users might enjoy. This strategy might be a clever way to entice users into eventually subscribing for an ad-free experience or more comprehensive AI interactions.
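A cap like "10 messages every two hours" is typically enforced with a sliding-window rate limiter. The sketch below is a generic illustration of that mechanism, not xAI's actual implementation:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` actions within any rolling `window` of seconds.
    Illustrative only; the 10-per-2-hours defaults mirror Grok's stated cap."""

    def __init__(self, limit=10, window=2 * 60 * 60):
        self.limit = limit
        self.window = window
        self.timestamps = deque()  # send times of recent, still-counted messages

    def allow(self, now=None):
        now = time.time() if now is None else now
        # Evict send times that have aged out of the rolling window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False
```

Unlike a fixed-interval reset, this design frees up capacity gradually: each message becomes available again exactly two hours after it was sent.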

Possible Impact On the Market

The impact of this decision on the AI chatbot market could be profound. Currently, competitors like ChatGPT offer free tiers but with limitations on advanced features. Grok’s move might pressure these platforms to either lower their barriers or enhance their offerings to keep pace. For xAI, this could be a strategic play to gather more data, refine their AI, and build a larger community around their products.

From a user perspective, the free access to Grok could further democratize AI interaction, making advanced conversational AI a tool for the masses rather than just a premium service. It’s a step towards what the tech community has been advocating for—more accessible, transparent, and perhaps less censored AI systems that respect user privacy and encourage open dialogue.

As this development unfolds, the real test will be how well xAI manages the influx of new users, maintains service quality, and whether this move will spur innovation or lead to a dilution of the unique character Grok has cultivated. For now, tech enthusiasts, casual users, and anyone in between have a new playground to explore, courtesy of xAI making Grok available to all.

]]>
610499
OpenAI Will Reportedly Release AI Agent That Can Control Computers https://www.webpronews.com/openai-will-reportedly-release-ai-agent-that-can-control-computers/ Sun, 17 Nov 2024 14:00:00 +0000 https://www.webpronews.com/?p=610085 OpenAI is playing catch-up to Anthropic, with the company reportedly working to release AI agents that can control computers on users’ behalf.

Anthropic updated its Claude AI model in October, giving it the ability to control a user’s computer and perform tasks for them. OpenAI is reportedly preparing to do the same, according to Bloomberg.

Codenamed “Operator,” the new AI agent would be able to complete complex tasks, such as booking travel arrangements or writing programming code. Bloomberg says OpenAI leadership told staff in an internal meeting that the company plans to release Operator in January. The initial release will be a research preview, and be available via developer APIs.

]]>
610085
Google Launches Standalone Gemini iPhone App https://www.webpronews.com/google-launches-standalone-gemini-iphone-app/ Sun, 17 Nov 2024 00:54:33 +0000 https://www.webpronews.com/?p=610077 Google has launched a standalone Gemini app for the iPhone, bringing one of the leading AI models to Apple iOS users.

Gemini is deeply integrated into Android phones, much like Siri on the iPhone. Google is making its AI model available to iOS users via a standalone app.

The company announced the app’s release in a blog post.

iPhone users can now experience Gemini in a whole new way with our dedicated mobile app. In addition to using Gemini through the Google app on iOS or a web browser, iPhone users can enjoy a more streamlined Gemini experience, with easy access to features that help improve learning, creativity and productivity.

Google outlined some of the benefits Gemini brings.

  • Have a free-flowing conversation with Gemini Live on your iPhone: iPhone users can now talk to Gemini in a conversational manner, including interrupting to ask questions or change the topic. It’s great for when you want to practice for an upcoming interview, ask for advice on things to do in a new city, or brainstorm and develop creative ideas. You can personalize Gemini’s voice by choosing from 10 distinct voices. Gemini Live on iPhones is available now in over 10 languages, with more coming soon.
  • Study smarter with Gemini: Gemini makes learning easier, enabling you to ask questions about any subject and get tailored study plans. Gemini can also provide custom, step-by-step guidance that adapts to your learning style, and you can even test your knowledge with quizzes. For example, you can attach a complex diagram and ask Gemini to quiz you on it.
  • Generate dazzling images in Gemini: Imagen 3, our highest-quality image generation model yet, quickly transforms your text descriptions into stunning AI images. Whether you’re looking for the perfect image to share in your friends’ group chat or need a unique visual for a creative project, Imagen 3’s enhanced photorealism and accuracy can bring your ideas to life — even galaxy themed, almond-shaped nails — with incredible detail and vibrancy.
  • Access your favorite apps in Gemini: Gemini seamlessly connects with your favorite apps from Google. With Extensions, Gemini can find and show you relevant information from the Google apps you use every day like YouTube, Google Maps, Gmail, Calendar and more — all within a single conversation.

Google Gemini has emerged as one of the leading AI models, alongside OpenAI’s ChatGPT, Anthropic’s Claude, and Perplexity AI. Google releasing it on iOS is no doubt an attempt by the company to solidify its place in iOS users’ workflows, especially as Apple rolls out its own Apple Intelligence.

]]>
610077
Red Hat Is Purchasing Neural Magic https://www.webpronews.com/red-hat-is-purchasing-neural-magic/ Fri, 15 Nov 2024 18:12:08 +0000 https://www.webpronews.com/?p=610068 Red Hat has announced a definitive agreement to purchase Neural Magic, a company specializing in “maximizing computational efficiency” of open-source AI models.

Neural Magic uses “software and algorithms that accelerate generative AI (gen AI) inference workloads.” Red Hat, and its upstream Fedora, has been working to become a leading development environment for AI technology. As a result, it’s not surprising the company is acquiring a startup specializing in improving the performance of open-source AI models.

Neural Magic’s expertise in inference performance engineering and commitment to open source aligns with Red Hat’s vision of high-performing AI workloads that directly map to customer-specific use cases and data, anywhere and everywhere across the hybrid cloud.

Red Hat emphasizes the need to make generative AI more accessible, especially given the ballooning computing power and energy demands AI models require.

Red Hat intends to address these challenges by making gen AI more accessible to more organizations through the open innovation of vLLM. Developed by UC Berkeley, vLLM is a community-driven open source project for open model serving (how gen AI models infer and solve problems), with support for all key model families, advanced inference acceleration research and diverse hardware backends including AMD GPUs, AWS Neuron, Google TPUs, Intel Gaudi, NVIDIA GPUs and x86 CPUs. Neural Magic’s leadership in the vLLM project combined with Red Hat’s strong portfolio of hybrid cloud AI technologies will offer organizations an open pathway to building AI strategies that meet their unique needs, wherever their data lives.

“AI workloads need to run wherever customer data lives across the hybrid cloud; this makes flexible, standardized and open platforms and tools a necessity, as they enable organizations to select the environments, resources and architectures that best align with their unique operational and data needs,” said Matt Hicks, president and CEO, Red Hat. “We’re thrilled to complement our hybrid cloud-focused AI portfolio with Neural Magic’s groundbreaking AI innovation, furthering our drive to not only be the ‘Red Hat’ of open source, but the ‘Red Hat’ of AI as well.”

“Open source has proven time and again to drive innovation through the power of community collaboration,” added Brian Stevens, CEO, Neural Magic. “At Neural Magic, we’ve assembled some of the industry’s top talent in AI performance engineering with a singular mission of building open, cross-platform, ultra-efficient LLM serving capabilities. Joining Red Hat is not only a cultural match, but will benefit companies large and small in their AI transformation journeys.”

]]>
610068
SUSE Announces SUSE AI, a Trusted Enterprise AI Platform https://www.webpronews.com/suse-announces-suse-ai-a-trusted-enterprise-ai-platform/ Tue, 12 Nov 2024 18:42:51 +0000 https://www.webpronews.com/?p=610018 SUSE, a leading provider of enterprise software and the creator of SUSE Linux Enterprise (SLE), announced the enterprise-grade SUSE AI.

AI is revolutionizing countless industries, but enterprise customers have stricter needs and requirements, especially in the realm of data security and privacy. SUSE AI is designed to meet those needs, providing “a secure, trusted platform” for generative AI, and is “an integrated cloud native solution.”

SUSE AI features include:

  • Secure by design: SUSE AI provides security and certifications at the software infrastructure level and tools that provide zero trust security, templates and playbooks for compliance. With SUSE AI, they can ensure that all use and processing of sensitive and private data remains private, significantly reducing the risk of data breaches and any unauthorized access. SUSE AI components are built with SUSE’s common criteria certified build system, which sanitizes the software and performs vulnerability scans. This ensures the security and integrity of those components and can reduce the impact of security breaches with enhanced incident response and recovery efforts.
  • Multifaceted trust: With SUSE AI, customers can trust their AI solutions – trust the security of the platform, trust that the generated data is correct and trust that their private customer and IP data stay private. They can deploy well-managed AI wherever their business needs it, from on-premise to hybrid to cloud and even in air-gapped environments. Enterprises can future-proof by taking advantage of the AI components provided as part of SUSE AI or bring their own AI tools to accommodate their unique use cases and increased workloads.
  • Choice: SUSE AI provides choice to customers by providing a secure platform on which they can select any AI components. Customers have full control over platform optimization and extension as well as flexibility in selecting and deploying large language models (LLMs). Simplified cluster operations and persistent storage along with easy access to pre-configured shared tools and services mean customers can rely on SUSE AI under any circumstance.

“AI is incredibly powerful, but without consideration, it has the potential to cause harm and damage reputations. As the value of GenAI – and the need for it – becomes more apparent, we are seeing customers struggle with compliance risks, shadow AI, and a lack of control, not to mention the vendor lock-in and skyrocketing costs associated with early-stage AI solutions,” said Abhinav Puri, Vice President of Portfolio Solutions at SUSE. “SUSE’s approach to AI, delivered in our SUSE AI solutions and the SUSE AI Early Access Program, helps address these issues for customers.”

“As enterprises demand for more secure, scalable AI solutions, our customers look to us for help in solving their critical challenges in privacy, compliance, and control,” added Nidhi Srivastava, TCS Global Head of AI.Cloud Offerings. “We are thrilled to partner with SUSE on the launch of its SUSE AI platform. Its new GenAI capabilities help to maintain security and sovereignty, which is essential in today’s regulatory landscape. Together with SUSE, we are committed to helping enterprises use AI to drive meaningful innovation while keeping their data protected.”

SUSE is an established name in the enterprise and open source world, especially in Europe. As a result, SUSE AI is sure to gain ground in an increasingly crowded AI market.

]]>
610018
X Is Making a Limited Version of Grok Available for Free https://www.webpronews.com/x-is-making-a-limited-version-of-grok-available-for-free/ Tue, 12 Nov 2024 16:42:47 +0000 https://www.webpronews.com/?p=610011 X is reportedly making a version of its Grok AI available to users for free, adding to the list of AI models available to the public.

X has been developing Grok AI, with the company investing heavily to advance its models. The company even built out its own cluster of 100,000 Nvidia H100s, rather than continuing to rely on Oracle to provide the necessary infrastructure.

According to software engineer Lohan Simpson, X is testing a free version, although with some limitations.

Opening up Grok to more users could help the company speed up development of the AI model.

]]>
610011
Google Photos Will Note When a Photo Is Edited With AI https://www.webpronews.com/google-photos-will-note-when-a-photo-is-edited-with-ai/ Mon, 28 Oct 2024 17:13:28 +0000 https://www.webpronews.com/?p=609620 In the ongoing battle to identify fake, AI-generated content, Google has revealed that Google Photos will note when an image is edited with the company’s AI tools.

As AI-generated pictures and videos become more common, companies are struggling to figure out the best way to flag such content and ensure it’s not used to mislead and deceive individuals. Google is taking a proactive approach, ensuring pictures edited with its tools are labeled as such.

The company outlined its plans in a blog post.

As we bring these tools to more people, we recognize the importance of doing so responsibly with our AI Principles as guidance. To further improve transparency, we’re making it easier to see when AI edits have been used in Google Photos. Starting next week, Google Photos will note when a photo has been edited with Google AI right in the Photos app.

Photos edited with tools like Magic Editor, Magic Eraser and Zoom Enhance already include metadata based on technical standards from The International Press Telecommunications Council (IPTC) to indicate that they’ve been edited using generative AI. Now we’re taking it a step further, making this information visible alongside information like the file name, location and backup status in the Photos app.

The company will also include information detailing whether a photo is a composite made up of multiple other photos, even if it was not created using generative AI.

In addition to indicating when an image has been edited using generative AI, we will also use IPTC metadata to indicate when an image is composed of elements from different photos using non-generative features. For example, Best Take on Pixel 8 and Pixel 9, and Add Me on Pixel 9 use images captured close together in time to create a blended image to help you capture great group photos.

Experts have been warning for years that AI could be used to cause untold harm through deepfake photos and videos. Bad actors could ruin reputations, affect stock prices, influence elections, and much more unless appropriate safeguards are put in place.

Google is to be commended for taking a proactive stance, doing its part to ensure AI-generated content is flagged as such.

]]>
609620