Developer https://www.webpronews.com/developer/ Breaking News in Tech, Search, Social, & Business Tue, 18 Feb 2025 11:09:58 +0000

Grok 3.0 Unveiled: A Technical Leap Forward in the AI Arms Race https://www.webpronews.com/grok-3-0-unveiled-a-technical-leap-forward-in-the-ai-arms-race/ Tue, 18 Feb 2025 11:09:11 +0000

February 18, 2025 – The artificial intelligence landscape has been set ablaze with the official launch of Grok 3.0, the latest flagship model from Elon Musk’s xAI. Announced on Monday, February 17, at 8:00 PM Pacific Time via a live demo streamed on X, Grok 3.0 is being heralded as a game-changer in the fiercely competitive world of generative AI. With Musk dubbing it the “smartest AI on Earth” and tech leaders buzzing about its potential, this release marks a pivotal moment in AI development. From its unprecedented computational scale to its innovative training methodologies, here’s a deep dive into what makes Grok 3.0 a technical marvel—and how it stacks up against its rivals.

A Monumental Technical Achievement

At the heart of Grok 3.0’s prowess is its training infrastructure: xAI’s Colossus supercomputer, a behemoth powered by 200,000 Nvidia H100 GPUs. During the launch event, Musk revealed that Grok 3.0 was trained with ten times the computational power of its predecessor, Grok 2, and that the cluster size had doubled in just 92 days after an initial deployment of 100,000 GPUs in 122 days. This makes it the largest fully connected H100 cluster ever built, a feat xAI engineers described as “monumental” given the tight timeline.

“We didn’t have much time because we wanted to launch Grok 3 as quickly as possible,” an xAI executive explained during the demo. “We’ve used all this computing power to continuously improve the product along the way.” This scale is a significant escalation in the AI arms race, testing the limits of scaling laws—principles suggesting that larger compute and data lead to proportionally better performance. Gavin Baker, a prominent tech investor, noted on X in December 2024, “This will be the first real test of scaling laws for training, arguably since GPT-4. If scaling laws hold, Grok 3 should be a major leap forward in AI’s state of the art.”
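Scaling laws are often summarized as a power law: loss falls as compute rises, but with diminishing returns, which is why each leap demands so much more hardware. As a rough illustration of that idea (the constants below are invented for the sketch, not xAI's actual figures), a toy curve looks like this:

```python
# Illustrative only: a toy power-law scaling curve with made-up constants,
# not xAI's actual training numbers.
def scaling_law_loss(compute, a=10.0, b=0.05, floor=1.5):
    """Toy power law: loss falls slowly toward a floor as compute grows."""
    return a * compute ** (-b) + floor

grok2_compute = 1.0                    # normalized units
grok3_compute = 10.0 * grok2_compute   # "ten times the compute," per the launch claims

print(f"toy loss at 1x compute:  {scaling_law_loss(grok2_compute):.3f}")
print(f"toy loss at 10x compute: {scaling_law_loss(grok3_compute):.3f}")
```

The shape, not the numbers, is the point: a tenfold jump in compute buys a modest but real drop in loss, and whether that drop keeps arriving at 200,000 GPUs is exactly the test Baker describes.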

Unlike many competitors relying on real-world data scraped from the web, Grok 3.0 leverages synthetic datasets designed to simulate diverse scenarios. Musk emphasized this shift during the World Governments Summit in Dubai on February 13, stating, “It’s trained on a lot of synthetic data and can reflect on its mistakes to achieve logical consistency.” This approach, combined with reinforcement learning and human feedback loops, aims to minimize “hallucinations”—AI-generated inaccuracies—by enabling the model to self-correct in real time. Early benchmarks showcased at the launch suggest this strategy is paying off, with Grok 3.0 outperforming rivals in science, math, and coding tasks.

What Tech and AI Leaders Are Saying

The announcement has sparked a flurry of reactions from industry luminaries. Elon Musk, ever the provocateur, claimed at the Dubai summit, “This might be the last time that an AI is better than Grok,” a bold assertion reflecting his confidence in xAI’s trajectory. During the launch, he praised the team’s efforts, saying, “Grok 3 is an order of magnitude more capable than Grok 2 in a very short period of time. It’s scary smart.”

Ethan Mollick, an AI researcher, commented on X post-launch: “Based on the announcement… X has caught up with the frontier of released models VERY quickly. If they continue to scale this fast, they are a major player.” Mollick also noted parallels with OpenAI’s playbook, suggesting xAI is adopting proven strategies while pushing boundaries with compute scale.

Not all feedback was glowing, however. Benjamin De Kraker, a former xAI engineer, had previously ranked Grok 3.0 below OpenAI’s o1 models in coding ability based on internal tests, a post that led to his resignation after xAI reportedly demanded its deletion. While this critique predates the final release, it underscores the high stakes and scrutiny surrounding Grok 3.0’s claims.

AI expert Dr. Alan D. Thompson praised Grok’s real-time data access via X integration, stating, “This feature sets it apart from competitors, offering fresh insights and potentially enhancing user experience with continuously updated information.” Meanwhile, posts on X from users like @iruletheworldmo, claiming insider knowledge, hyped a reasoning model that “blows past full o3 scores,” amplifying anticipation.

Comparing Grok 3.0 to Rivals

Grok 3.0 enters a crowded field dominated by OpenAI’s ChatGPT (GPT-4o), Google’s Gemini, Anthropic’s Claude, and China’s DeepSeek R1. xAI showcased comparison benchmarks at the launch, asserting Grok 3.0 Reasoning surpasses Gemini 2 Pro, DeepSeek V3, and ChatGPT-4o in standardized tests like AIME 2025 (math), alongside coding and science tasks. A standout claim came from Chatbot Arena, where an early Grok 3.0 iteration (codename “chocolate”) scored 1402, the first model to break 1400, edging out OpenAI’s ChatGPT-4o-latest at 1377.
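Chatbot Arena scores are Elo-style ratings, so the 1402-vs-1377 gap can be translated into an expected head-to-head win rate. A sketch using the standard Elo expectation formula (Arena's own methodology is Bradley-Terry based, but it has the same logistic form):

```python
# Standard Elo expectation formula; Chatbot Arena's Bradley-Terry model
# has the same logistic shape.
def elo_win_probability(rating_a, rating_b):
    """Expected probability that model A beats model B in a head-to-head vote."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

grok3_early = 1402    # "chocolate" checkpoint score cited at launch
gpt4o_latest = 1377   # ChatGPT-4o-latest score cited at launch

p = elo_win_probability(grok3_early, gpt4o_latest)
print(f"Expected head-to-head win rate: {p:.1%}")  # roughly 53.6%
```

In other words, a 25-point Elo lead implies winning only slightly more than half of head-to-head matchups, a real but narrow edge, which is why independent verification of the benchmark claims matters.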

Technical Differentiators

  • Compute Scale: Grok 3.0’s 200,000-GPU training dwarfs ChatGPT-4o’s estimated 10,000–20,000 GPU cluster and DeepSeek’s leaner, cost-efficient approach. This brute-force scaling aligns with Musk’s vision of accelerating AI breakthroughs.
  • Synthetic Data & Self-Correction: Unlike GPT-4o and Gemini, which rely heavily on web-scraped data, Grok 3.0’s synthetic training reduces legal risks and biases, while its self-correcting mechanism aims for higher logical accuracy. OpenAI’s o1 and DeepSeek’s R1 also feature reasoning capabilities, but xAI claims Grok 3.0’s “Big Brain” mode offers superior adaptability.
  • Real-Time X Integration: A native advantage over rivals, Grok 3.0 pulls live data from X, making it uniquely responsive to current events—a capability ChatGPT and Gemini lack without external plugins.
  • Reasoning Models: Grok 3.0 Reasoning and its smaller sibling, Grok 3 mini Reasoning, mimic OpenAI’s o1 series by “thinking through” problems step-by-step. xAI asserts Grok 3.0 Reasoning beats o1-mini-high on AIME 2025, though independent verification is pending.

Features and Accessibility

Grok 3.0 introduces “DeepSearch,” a next-generation search engine rivaling OpenAI’s Deep Research, scanning X and the web for comprehensive answers. Multimodal capabilities—analyzing images alongside text—mirror ChatGPT-4o and Gemini, but xAI’s Flux-based image generation (enhanced by the new Aurora model) promises photorealistic precision. Voice mode, teased for release within a week, could challenge ChatGPT’s conversational edge.

Initially rolled out to X Premium+ subscribers ($50/month), Grok 3.0 also offers a standalone “SuperGrok” subscription ($30/month or $300/year) for unlimited queries and early feature access. This tiered model contrasts with ChatGPT’s broader free tier and DeepSeek’s open-source approach, potentially limiting Grok’s immediate reach.

Rival Responses

OpenAI, facing Musk’s $97.4 billion buyout bid (rejected in February), has doubled down with free reasoning models like o3-mini. DeepSeek’s R1, built on a fraction of Western budgets, has disrupted the market, prompting xAI to accelerate Grok 3.0’s timeline. Google’s Gemini 2.0 series remains a formidable contender with its vast parameter count, though it lacks Grok’s real-time data edge.

The Bigger Picture

Grok 3.0’s launch isn’t just a technical milestone—it’s a statement. Musk’s xAI, founded in 2023, has catapulted from underdog to frontrunner in under two years, leveraging massive compute, synthetic data innovation, and X’s ecosystem. The model’s beta status—expect “imperfections at first,” Musk cautioned—belies its ambition: daily improvements aim to outpace rivals’ static updates.

Yet challenges loom. Grok’s X-centric data raises misinformation risks, a concern amplified by its less restrictive content policies. Independent benchmarks will determine if its performance claims hold against OpenAI’s polish, Google’s scale, and DeepSeek’s efficiency. Mollick’s X post hints at an API play, but its adoption remains uncertain amidst established ecosystems.

For now, Grok 3.0 stands as a testament to scaling laws’ enduring power and xAI’s relentless pace. As Musk mused during the demo, referencing Robert Heinlein’s “Stranger in a Strange Land,” “To grok is to deeply understand—and empathy is part of that.” Whether Grok 3.0 truly “groks” the world better than its rivals, it’s undeniably redefined the AI frontier. The race is far from over, but xAI has just fired a shot heard across the tech universe.

Gnome Software Developers Consider Dropping RPM Support https://www.webpronews.com/gnome-software-developers-consider-dropping-rpm-support/ Mon, 17 Feb 2025 15:32:34 +0000

The developers of Gnome Software have floated the idea of dropping support for RPM packages entirely in favor of Flatpaks.

Gnome Software is the software center for the Gnome desktop environment (DE), and is a popular option for other DEs that don’t have their own software center, such as Xfce. Gnome Software is especially front-and-center on Fedora Workstation, given the amount of overlap between Fedora and Gnome developers.

In a mailing list post, user tqcharm recently recommended that Gnome Software completely remove support for RPMs, the native format for apps in the Red Hat/Fedora world.

Since the consensus seems to be that RPMs should be at the end of the priority list, what about decoupling (removing) RPMs from GNOME Software completely?

This might seem to be a step back, but it would make GNOME Software more consistent between Workstation and Silverblue, and support Fedora in its goal to make Flatpaks the primary packaging option.

That would leave RPMs to be a choice of the more advanced users, who seem to prefer the powerful dnf over GNOME Software anyway.

With RPMs missing from GNOME Software, prioritizing package sources becomes easier too: be it Fedora Core > Flathub Verified (or Probably Safe) -> Fedora Extended -> Flathub Extended or similar.

Michael Catanzaro, a Red Hat engineer, as well as a prominent Fedora and Gnome developer, replied with the following:

Removing RPM applications is my long term goal, but I’m not sure how quickly we’ll be able to get there.

Flatpaks, like Snaps, are a containerized app format that bundles all the necessary dependencies with the app rather than relying on the underlying system. This is similar to how applications work on macOS, and it solves many of the dependency issues that arise when trying to run the latest software on older, point-release distros.

Despite the advantages they offer, Flatpaks still have some disadvantages. For example, Flatpaks are designed primarily with desktop apps in mind, and are not suited for command-line apps. Flatpaks can also take up more space than traditional apps, although this becomes less of a factor as more Flatpaks are installed, since Flatpaks can share dependencies among themselves.
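The amortization effect described above is simple arithmetic: the shared runtime is downloaded once, so the per-app overhead shrinks as more Flatpaks use it. A sketch with hypothetical sizes (real runtimes and apps vary widely):

```python
# Hypothetical sizes for illustration only; real runtimes and apps vary widely.
RUNTIME_MB = 800      # a shared Flatpak runtime, downloaded once
FLATPAK_APP_MB = 60   # per-app payload on top of the shared runtime
NATIVE_APP_MB = 40    # a comparable RPM/DEB app using system libraries

def flatpak_total_mb(num_apps):
    """Total disk use: one shared runtime plus each app's own payload."""
    return RUNTIME_MB + FLATPAK_APP_MB * num_apps

def overhead_per_app_mb(num_apps):
    """Extra space per app versus native packages; shrinks as apps share the runtime."""
    return (flatpak_total_mb(num_apps) - NATIVE_APP_MB * num_apps) / num_apps

for n in (1, 10, 40):
    print(f"{n:>3} apps: {overhead_per_app_mb(n):>6.0f} MB extra per app")
```

With these made-up numbers, the overhead drops from 820 MB for a single app to 40 MB per app once forty apps share the runtime, which is why the space penalty fades on Flatpak-heavy systems.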

In addition, many Linux users still prefer traditional app package formats, such as RPMs in the Red Hat/Fedora/openSUSE world and DEBs in the Debian/Ubuntu world. There is also the question of how Gnome Software would handle native packages on other Linux distros, such as Debian and Ubuntu-based distros.

Ultimately, Gnome developers have a longstanding reputation for removing functionality that many users consider important, such as maximize/minimize window buttons, desktop icons, and more. This philosophy has contributed to many users transitioning to KDE Plasma, Cinnamon, or Xfce, all of which maintain the traditional desktop paradigm.

If Gnome developers move forward with this plan, it’s a move that will likely alienate even more users.

NordVPN Passes Fifth No-Logs Audit https://www.webpronews.com/nordvpn-passes-fifth-no-logs-audit/ Fri, 14 Feb 2025 16:05:41 +0000

NordVPN has achieved a major milestone, passing its fifth no-logs audit performed by an independent group of security researchers.

NordVPN is one of the most popular and widely respected VPN companies. Like many top-tier providers, NordVPN has a no-logs policy, meaning the service maintains no logs of its users’ activity. This ensures that NordVPN cannot turn over any user browsing data to authorities, since such data should not exist. Of course, a no-logs policy only provides protection if users have assurance the company honors it, which is where third-party audits come in.

According to the company, the fifth audit was performed by Deloitte Audit Lithuania. Deloitte had access to NordVPN’s servers for over a month, from November 18 to December 20, 2024, and during that time the researchers conducted an exhaustive analysis of NordVPN’s operations.

  • A detailed analysis. The researchers interviewed our employees and inspected our server infrastructure and technical logs to complete the assessment procedure. The evaluation covered the configuration and deployment processes of standard VPN, Double VPN, Onion Over VPN, obfuscated servers, and P2P servers.
  • Current state assessment. Deloitte had access to our services from November 18 to December 20, 2024. During this period, it evaluated the systems and then provided insights. Deloitte conducted the assessment in accordance with the International Standard on Assurance Engagements 3000 (Revised) (ISAE 3000), established by the International Auditing and Assurance Standards Board (IAASB) with the aim of examining NordVPN’s IT system configuration and management.

“The trust we earn from our customers underscores everything we do in the cybersecurity industry. It’s a currency that’s hard to acquire and one we never take for granted. To maintain that trust, we not only strive each year to innovate and develop world-leading cybersecurity products, but we also fully commit to our promise not to monitor or record our users’ online traffic. Having this assurance reaffirmed by independent, globally respected researchers for the fifth time demonstrates that privacy isn’t just a buzzword at NordVPN — it’s in our DNA,” says Marijus Briedis, CTO at NordVPN.

This latest audit builds on NordVPN’s previous audits in 2018, 2020, 2022, and 2023.

Becoming a Customer-Centric Custom Software Company: Challenges and Tips From a Chief Commercial Officer https://www.webpronews.com/custom-software-company/ Fri, 14 Feb 2025 14:24:07 +0000

An interesting fact: while 88% of companies admit that customer outcomes should be at the core of their business models, a mere 15% of C-level executives and sales departments practice a customer-centric approach. Harvard Business Review specialists came to this conclusion while researching details of over 1,000 initiatives on customer-centricity transformation for their new book. The majority of companies focus on finding buyers for their products rather than trying to understand their customers and cater to their needs.

Why is changing your mindset and becoming a customer-centric custom software company crucial? According to Deloitte, businesses that prioritize their customers make 60% more profit than their peers. The Economist Intelligence Unit states that 64% of C-levels who name customer centricity among their chief areas for investment are confident that they surpass their peers in profit. Other benefits of ensuring superb user experiences include improved retention, loyalty, and lifetime value of customers.

International IT company Andersen conducted an interview with Valentin Kuzmenko, Chief Commercial Officer and Vice President of Sales. In this article, he shares practical advice on how to effectively harmonize a company’s vision with its customers’ aspirations and explains some common mistakes.

3 best practices for improved customer centricity

Interviewer: Valentin, directing every effort to deliver exceptional value to customers is a must for business growth. How do you achieve this?

Valentin: This can be done in three simple steps.

Firstly, the establishment of a customer-centric culture should start with top executives. Every specialist, from a helpdesk agent to a senior manager, should clearly understand what impact their activities have on customer success. 

How can this be implemented practically? The company’s budget should include funds for software, events, and other measures that improve customer experience, and those allocation decisions should be made at the top level.

Even though these investments may not have an immediate positive effect, they pay off in the long run. Key benefits include increased sales and customer loyalty, along with higher employee engagement as your teams see their efforts rewarded.

Your genuine interest in meeting customer needs on the management level can be expressed not only by financing customer-centric activities but also by taking customer opinions into account. Jeff Bezos, for example, symbolically left an empty chair in every meeting, which stood for his care for the users of Amazon’s products and the need to consider them when making decisions.

On other levels, we recommend democratizing company data as much as possible so that everyone is on the same page regarding your company’s projects and customer outcomes. It’s also a good practice to reward specialists who receive positive reviews and significantly contribute to customer success.

Secondly, every solution should be tailored to your customer’s unique business challenges. From receiving the customer’s initial contact form request to analyzing project requirements and actually building custom software or rendering services, every step of collaboration should be tailored to meet your customer’s unique business needs. To achieve this, we at Andersen frequently travel on business to visit our customers’ facilities in person, observe the production process, and better understand their pressing challenges.

Moreover, we make sure that our processes are of ultimate convenience for our prospects and customers. Signing the contract occurs in several clearly defined, transparent steps, and it takes about 10 business days from delivering the first CVs to assigning the chosen specialists to projects. Our enterprise-scale website, though complex, is clearly structured and boasts well-defined user journeys, impeccable performance, and visually appealing designs. Last but not least, we practice iterative Agile development, rapidly reacting to changing project requirements.

Thirdly, you should be continually soliciting feedback and acting upon it. Nurturing a customer-centric culture entails holding a continuous dialogue with your customers to carry out mutually beneficial projects. Customer feedback can come from various channels, including surveys, social media, and more. It’s also crucial to introduce robust analytics throughout customers’ decision-making paths to analyze your offers from the end-user’s perspective and personalize customer journeys. Based on these insights, companies should take action to adjust the ways they communicate and do business.

At Andersen, we went one step further in soliciting customer feedback and developed AI-fueled solutions that help its users improve the quality of customer interactions. Our AI Call Quality Assistant and AI Business Correspondence Assistant monitor employee adherence to scripts in real time, analyze tone and sentiment during communication and suggest improvements, measure customer satisfaction, and more.

3 traps a custom software company can fall into when ensuring customer centricity

Interviewer: What are the common hurdles that stand in the way of companies striving to become truly customer-centric?

Valentin: I’ll name three major challenges.

The first one is insufficient resources. A software development company can fail to assign the necessary funds or appropriate specialists to a project for various reasons: its specialists aren’t properly trained, lack the needed qualifications, or haven’t mastered the latest technology stacks. As a result, customers might fall behind critical market trends and miss valuable opportunities.

The second one is delivery hurdles. When the responsibilities and roles on the developer’s and customer’s sides are explicitly outlined and there are solid communication channels and experienced project managers in place, teams are productive and deliver impressive results. Conversely, when these things are missing, delivering the required service becomes challenging.

Finally, the third one is flawed prioritization. This can happen when the company isn’t on the same page with the customer regarding strategic goals and product vision. The reasons may vary from inadequate analysis of business requirements and poor reporting to irregular communication and failure to adhere to Agile practices and be flexible to changes. As a result, the customer might fail to enter lucrative markets or earn substantial profits while their technology partner is busy solving less relevant issues.

Interviewer: What are the most effective solutions for these challenges?

Valentin: From my experience, a custom software company can solve the above challenges by having a robust discovery phase in place when the market situation and project requirements are thoroughly analyzed to set the right priorities. Furthermore, it must ensure its talent pool and qualifications are sufficient to address customer priorities through the chosen collaboration model – staff augmentation, product development services, or managed delivery. Ideally, the development should be carried out in iterations to ensure flexibility and customer approval of results at each step.

At Andersen, the company which I represent, we’re continuously improving to tackle challenges and make customer centricity the foundation of our daily operations, driving value and delivering tangible outcomes for our customers.

Apple Adds Ability to Migrate Purchases Between Apple Accounts https://www.webpronews.com/apple-adds-ability-to-migrate-purchases-between-apple-accounts/ Thu, 13 Feb 2025 18:52:19 +0000

Apple has made a major change to how the App Store works, finally giving users the ability to migrate purchases between Apple Accounts.

Until now, users who purchased apps via the App Store could not transfer them to another account, leading to doubled-up purchases in many cases. This was especially true in cases where individuals created a new Apple account and wanted to move their purchases to the new one. Apple has finally addressed the issue, as outlined in a new support article.

You can choose to migrate apps, music, and other content you’ve purchased from Apple on a secondary Apple Account to a primary Apple Account. The secondary Apple Account might be an account that’s used only for purchases. You’ll need access to the primary email address or phone number and password for both accounts, and neither account should be shared with anyone else. Learn more about how to migrate purchases.

  • At the time of migration, the Apple Account signed in for use with iCloud and most features on your iPhone or iPad will be referred to as the primary Apple Account.
  • At the time of migration, the Apple Account signed in just for use with Media & Purchases will be referred to as the secondary Apple Account.

When the transfer occurs, payment methods from the secondary account migrate to the new account. All “apps, music, movies, TV shows, and books” also migrate to the primary account, as do any active subscriptions.

This feature is long overdue and a welcome improvement for many users.

Thomson Reuters Wins AI Copyright Case, Spelling Trouble for AI Firms https://www.webpronews.com/thomson-reuters-win-ai-copyright-case-spelling-trouble-for-ai-firms/ Wed, 12 Feb 2025 20:35:27 +0000

Thomson Reuters has won its case against Ross Intelligence, setting a legal precedent for how AI firms collect and use the vast quantities of data their models rely on.

The vast majority of AI companies have engaged in legally questionable behavior, hoovering up vast quantities of copyrighted data to use for training purposes. The firms have argued that fair use covers their activity, but that hasn’t stopped multiple companies and media outlets from suing various AI firms.

Thomson Reuters sued Ross Intelligence, a startup that has since shut down because of the cost of the legal battle, alleging copyright infringement. Specifically, Ross Intelligence was accused of using Thomson Reuters’ legal database as the basis for some of its AI-generated materials.

Notably, in his ruling, U.S. Circuit Judge Stephanos Bibas reversed his original decision, in which he initially ruled that a jury would need to decide the fair use aspect of the case.

A smart man knows when he is right; a wise man knows when he is wrong. Wisdom does not always find me, so I try to embrace it when it does––even if it comes late, as it did here.

I thus revise my 2023 summary judgment opinion and order in this case. See Fed. R. Civ. P. 54(b); D.I. 547, 548; Thomson Reuters Enter. Ctr. GmbH v. Ross Intel. Inc., 694 F. Supp. 3d 467 (D. Del. 2023). Now I (1) grant most of Thomson Reuters’s motion for partial summary judgment on direct copyright infringement and related defenses, D.I. 674; (2) grant Thomson Reuters’s motion for partial summary judgment on fair use, D.I. 672; (3) deny Ross’s motion for summary judgment on fair use, D.I. 676; and (4) deny Ross’s motion for summary judgment on Thomson Reuters’s copyright claims, D.I. 683.

Case Background

Judge Bibas then summarizes the case, acknowledging that Thomson Reuters’ Westlaw database is one of the largest legal databases in the U.S., with the company licensing its contents to users. In an effort to build a competing database, Ross asked to license Westlaw content. Because Ross’ stated goal was to build a competitor to Westlaw, Thomson Reuters understandably declined to license its content to the firm.

In what has been a common refrain among AI firms when they can’t legally access data they want/need for their AI models, Ross moved ahead anyway.

So to train its AI, Ross made a deal with LegalEase to get training data in the form of “Bulk Memos.” Id. at 5. Bulk Memos are lawyers’ compilations of legal questions with good and bad answers. LegalEase gave those lawyers a guide explaining how to create those questions using Westlaw headnotes, while clarifying that the lawyers should not just copy and paste headnotes directly into the questions. D.I. 678-36 at 5–9. LegalEase sold Ross roughly 25,000 Bulk Memos, which Ross used to train its AI search tool. See D.I. 752-1 at 5; D.I. 769 at 30 (10:48:35). In other words, Ross built its competing product using Bulk Memos, which in turn were built from Westlaw headnotes. When Thomson Reuters found out, it sued Ross for copyright infringement.

The Headnotes and Key Number System Questions

At the heart of the case was whether Ross infringed copyright by copying Westlaw headnotes based on their originality.

The headnotes are original. A headnote is a short, key point of law chiseled out of a lengthy judicial opinion. The text of judicial opinions is not copyrightable. Banks v. Manchester, 128 U.S. 244, 253–54 (1888). And even if it were, Thomson Reuters would not get that copyright because it did not write the opinions. But a headnote can introduce creativity by distilling, synthesizing, or explaining part of an opinion, and thus be copyrightable. That is why I have changed my mind.

First, the headnotes are a compilation. “Factual compilations” are original if the compiler makes “choices as to selection and arrangement” using “a minimal degree of creativity.” Feist, 499 U.S. at 348. Thomson Reuters’s selection and arrangement of its headnotes easily clears that low bar.

More than that, each headnote is an individual, copyrightable work. That became clear to me once I analogized the lawyer’s editorial judgment to that of a sculptor. A block of raw marble, like a judicial opinion, is not copyrightable. Yet a sculptor creates a sculpture by choosing what to cut away and what to leave in place. That sculpture is copyrightable. 17 U.S.C. § 102(a)(5). So too, even a headnote taken verbatim from an opinion is a carefully chosen fraction of the whole. Identifying which words matter and chiseling away the surrounding mass expresses the editor’s idea about what the important point of law from the opinion is. That editorial expression has enough “creative spark” to be original. Feist, 499 U.S. at 345. So all headnotes, even any that quote judicial opinions verbatim, have original value as individual works. That belated insight explains my change of heart. In my 2023 opinion, I wrongly viewed the degree of overlap between the headnote text and the case opinion text as dispositive of originality. 694 F. Supp. 3d at 478. I no longer think that is so. But I am still not granting summary judgment on any headnotes that are verbatim copies of the case opinion (for reasons that I explain below).

Similarly, Ross Intelligence copied Westlaw’s Key Number System, although it did not present the Key Number System to customers.

The Key Number System is original too. There is no genuine issue of material fact about the Key Number System’s originality. Recall that Westlaw uses this taxonomy to organize its materials. Even if “most of the organization decisions are made by a rote computer program and the high-level topics largely track common doctrinal topics taught as law school courses,” it still has the minimum “spark” of originality. Id. at 477 (internal quotation marks omitted); Feist, 499 U.S. at 345. The question is whether the system is original, not how hard Thomson Reuters worked to create it. Feist, 499 U.S. at 359–60. So whether a rote computer program did the work is not dispositive. And it does not matter if the Key Number System categorizes opinions into legal buckets that any first-year law student would recognize. To be original, a compilation need not be “novel,” just “independently created by” Thomson Reuters. Id. at 345–46. There are many possible, logical ways to organize legal topics by level of granularity. It is enough that Thomson Reuters chose a particular one.

The Fair Use Issue

The biggest issue of all, however, was whether Ross’ actions fell under fair use, a legal doctrine that allows copyrighted material to be used under specific circumstances. In his ruling, Judge Bibas reiterated that he was reversing his initial ruling, including on the fair use question, before outlining the four specific factors that must be considered.

I must consider at least four fair-use factors: (1) the use’s purpose and character, including whether it is commercial or nonprofit; (2) the copyrighted work’s nature; (3) how much of the work was used and how substantial a part it was relative to the copyrighted work’s whole; and (4) how Ross’s use affected the copyrighted work’s value or potential market. 17 U.S.C. § 107(1)–(4). The first and fourth factors weigh most heavily in the analysis. Authors Guild v. Google, Inc., 804 F.3d 202, 220 (2d Cir. 2015) (Leval, J.).

Factor One – The Purpose and Character of Ross’ Use

Judge Bibas said the first factor went in favor of Thomson Reuters, ruling that Ross’ commercial intentions and the non-transformative nature of its use of Westlaw data argued against fair use.

Ross’s use is not transformative. Transformativeness is about the purpose of the use. “If an original work and a secondary use share the same or highly similar purposes, and the second use is of a commercial nature, the first factor is likely to weigh against fair use, absent some other justification for copying.” Warhol, 598 U.S. at 532–33. It weighs against fair use here. Ross’s use is not transformative because it does not have a “further purpose or different character” from Thomson Reuters’s. Id. at 529.

Judge Bibas also addressed the question of bad faith, concluding it did not need to be resolved:

But because Ross’s use was commercial and not transformative, I need not consider this possible element. Even if I found no bad faith, that finding would not outweigh the other two considerations.

Factor Two – The Nature of the Original Work

The second factor went in favor of Ross. This factor turned on the creativity involved in Westlaw’s headnotes, and whether they were creative enough to weigh against a finding of fair use.

Westlaw’s material has more than the minimal spark of originality required for copyright validity. But the material is not that creative. Though the headnotes required editorial creativity and judgment, that creativity is less than that of a novelist or artist drafting a work from scratch. And the Key Number System is a factual compilation, so its creativity is limited.

So factor two goes for Ross. Note, though, that this factor “has rarely played a significant role in the determination of a fair use dispute.”

Factor Three – How Much of the Work Was Used and How Substantial It Was Relative to the Whole

The third factor also went in favor of Ross.

My prior opinion did not decide factor three but suggested that it leaned towards Ross. The opinion focused on Ross’s claim that its output to an end user is a judicial opinion, not a West headnote, so it “communicates little sense of the original.” 694 F. Supp. 3d at 485 (quoting Authors Guild, 804 F.3d at 223).

I stand by that reasoning, but now go a step further and decide factor three for Ross. There is no factual dispute: Ross’s output to an end user does not include a West headnote. What matters is not “the amount and substantiality of the portion used in making a copy, but rather the amount and substantiality of what is thereby made accessible to a public for which it may serve as a competing substitute.” Authors Guild, 804 F.3d at 222 (internal quotation marks omitted). Because Ross did not make West headnotes available to the public, Ross benefits from factor three.

Factor Four – The Effect of Ross’ Copying of Westlaw

Judge Bibas cites Harper & Row in saying this fourth factor “is undoubtedly the single most important element of fair use.”

My prior opinion left this factor for the jury. I thought that “Ross’s use might be transformative, creating a brand-new research platform that serves a different purpose than Westlaw.” 694 F. Supp. 3d at 486. If that were true, then Ross would not be a market substitute for Westlaw. Plus, I worried whether there was a relevant, genuine issue of material fact about whether Thomson Reuters would use its data to train AI tools or sell its headnotes as training data. Id. And I thought a jury ought to sort out “whether the public’s interest is better served by protecting a creator or a copier.” Id.

In hindsight, those concerns are unpersuasive. Even taking all facts in favor of Ross, it meant to compete with Westlaw by developing a market substitute. D.I. 752-1 at 4. And it does not matter whether Thomson Reuters has used the data to train its own legal search tools; the effect on a potential market for AI training data is enough. Ross bears the burden of proof. It has not put forward enough facts to show that these markets do not exist and would not be affected.

The Decision

Ultimately, when taking the above factors into consideration, Judge Bibas rejected Ross’ fair-use defense.

Factors one and four favor Thomson Reuters. Factors two and three favor Ross. Factor two matters less than the others, and factor four matters more. Weighing them all together, I grant summary judgment for Thomson Reuters on fair use.
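The weighing can be sketched, purely as an illustration, with a weighted tally. The numeric weights below are my own assumptions chosen to mirror the opinion’s language (factor two “matters less,” factor four “matters more”); courts do not apply numeric formulas.

```python
# Illustrative tally of the four fair-use factors as the opinion resolves
# them. Weights are assumptions for illustration, not the court's method.
FACTORS = {
    1: ("purpose and character of the use", "Thomson Reuters", 2.0),
    2: ("nature of the copyrighted work", "Ross", 0.5),              # "matters less"
    3: ("amount and substantiality used", "Ross", 1.0),
    4: ("effect on the potential market", "Thomson Reuters", 3.0),   # "matters more"
}

def weigh(factors):
    """Sum each party's weighted factors and return the favored party."""
    totals = {}
    for _, (_desc, party, weight) in factors.items():
        totals[party] = totals.get(party, 0.0) + weight
    return max(totals, key=totals.get)

print(weigh(FACTORS))  # the weighted tally favors Thomson Reuters
```

Even with two factors apiece, the heavier weights on factors one and four carry the tally for Thomson Reuters, which is the shape of the court’s conclusion.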

I grant partial summary judgment to Thomson Reuters on direct copyright infringement for the headnotes in Appendix A. For those headnotes, the only remaining factual issue on liability is that some of those copyrights may have expired or been untimely created. This factual question underlying copyright validity is for the jury. I also grant summary judgment to Thomson Reuters against Ross’s defenses of innocent infringement, copyright misuse, merger, scenes à faire, and fair use. I deny Ross’s motions for summary judgment on direct copyright infringement and fair use. I revise all parts of my prior opinions that conflict with this one. I leave undisturbed the parts of my prior opinion not addressed in this one, such as my rulings on contributory liability, vicarious liability, and tortious interference with contract.

“We are pleased that the court granted summary judgment in our favor and concluded that Westlaw’s editorial content, created and maintained by our attorney editors, is protected by copyright and cannot be used without our consent. The copying of our content was not ‘fair use,'” the company said in a statement.

The Implications of the Decision

The implications of Judge Bibas’ decision will reach far and wide within the AI industry, serving as a warning to companies that have engaged in similar practices.

Meta, for example, is involved in a court case in which its own internal emails detail the questions and concerns staff had about pirating more than 80 TB of data comprising tens of millions of books. Those same emails implicate OpenAI for allegedly engaging in the same behavior, including pirating books from the same sources.

AI firms have maintained that fair use covers their activities, making it legal to hoover up any and all data, regardless of copyright status. Judge Bibas’ decision, on the other hand, raises major questions about that argument.

If Judge Bibas’ ruling is cited as precedent in the many other AI copyright cases being litigated, it could spell disaster for the AI industry, leaving firms and their executives liable for untold sums in damages and even facing potential criminal charges.

Meta Pirated 80 TB of Copyrighted Books to Train AI https://www.webpronews.com/meta-pirated-80-tb-of-copyrighted-books-to-train-ai/ Wed, 12 Feb 2025 20:19:41 +0000 https://www.webpronews.com/?p=611533 Meta is in hot water, with internal emails showing the company torrented more than 80 TB of copyrighted and pirated books from questionable online databases.

Meta is one of several companies in hot water over how it trains its AI models, with a legal case accusing the company of pirating tens of millions of books from questionable sources.

The plaintiffs describe the extent of Meta’s actions in their complaint:

“However it is done, torrenting pirated works is flagrantly illegal. And the magnitude of Meta’s unlawful torrenting scheme is astonishing: just last spring, Meta torrented at least 81.7 terabytes of data across multiple shadow libraries through the site Anna’s Archive, including at least 35.7 terabytes of data from Z-Library and LibGen. Pritt Decl., Ex. H. Meta also previously torrented 80.6 terabytes of data from LibGen (Sci-Mag).”

The plaintiffs emphasize the legal ramifications of Meta’s actions, especially compared to established legal precedent:

“Vastly smaller acts of data piracy—just .008% of the amount of copyrighted works Meta pirated—have resulted in Judges referring the conduct to the U.S. Attorneys’ office for criminal investigation.”
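A quick back-of-envelope check puts that comparison in perspective, using the 81.7 TB figure from the filing (decimal TB/GB assumed):

```python
# Back-of-envelope check of the complaint's ".008%" comparison, using the
# 81.7 TB figure from the filing (decimal TB/GB assumed).
meta_pirated_tb = 81.7                      # torrented via Anna's Archive
fraction = 0.008 / 100                      # ".008%" as a fraction
smaller_act_gb = meta_pirated_tb * 1000 * fraction
print(f"{smaller_act_gb:.1f} GB")           # roughly 6.5 GB
```

In other words, acts of piracy on the order of just a few gigabytes have drawn criminal referrals, against Meta’s tens of terabytes.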

Meta’s Emails Undermine the Company’s Case

The extent of Meta’s activities wasn’t fully known until internal emails were made public, painting a damning picture of the company’s actions.

Below are excerpts from the court filing (thanks to Ars Technica for hosting the document):

Melanie Kambadur stated on a message chain, “I don’t think we should use pirated material. I really need to draw a line there.” The four messages that follow are redacted.

Joelle Pineau responds to Eleonora Presani’s statement that “using pirated material should be beyond our ethical threshold.” Ms. Pineau then asks, “You think it’s problematic to use even for this phase?” followed by a redacted sentence. Presani then says “SciHub, ResearchGate, LibGen are basically like PirateBay or something like that, they are distributing content that is protected by copyright and they’re infringing it.”

This document appears to be notes from a January 2023 meeting that Mark Zuckerberg attended. It is heavily redacted, including a large section titled “Legal Escalations.” Immediately after that section the document states “[Zuckerberg] wants to move this stuff forward,” and “we need to find a way to unblock all this.”

Nikolay Bashlykov suggested that Meta conceal its downloading of LibGen data using a VPN (“Can we load libgen data using Meta IP ranges? Or should we use some vpn?”). All three bullet points that follow are redacted.

In an internal message, Nikolay Bashlykov expresses concern about using Meta IP addresses “to load through torrents pirate content,” and says, “torrenting from a corporate laptop doesn’t feel right :).” A response from David Esiobu is redacted.

This document contains admissions that Meta knew that LibGen was pirated (i.e., illegal) and expresses concern over what will happen if regulators learn that Meta is training Llama on pirated copyrighted data. The “Legal Risk” section is entirely redacted.

This document shows Meta employees deciding not to use “FB [Facebook] infra[structure]” for its “data downloading” from pirated databases in order to “avoid[] risk of tracing back the seeder/downloader [] from FB servers.”

The emails even implicate OpenAI for allegedly engaging in identical behavior.

On a message thread, Sy Choudhury discusses OpenAI’s use of LibGen. An update from “the partnerships side” is redacted.

On a message chain, Erin Murray explains that OpenAI’s model is likely trained on Smashwords and LibGen. The latter half of her message, addressed to Beau James, an in-house counsel, is redacted.

The above communications paint a damning picture of a company and its executives that knowingly crossed the line in an effort to train its AI models, even taking action to limit the fallout and legal repercussions should its actions ever be discovered.

EU to Spend $206 Billion to Catch Up In the AI Race https://www.webpronews.com/eu-to-spend-206-billion-to-catch-up-in-the-ai-race/ Tue, 11 Feb 2025 18:22:37 +0000 https://www.webpronews.com/?p=611516 The EU is preparing to spend big in an effort to catch up in the AI race, setting aside 200 billion euros ($206.15 billion) to challenge the U.S. and China.

The EU has been falling behind the U.S. and China, thanks in no small part to far stricter regulation. Apple and Meta have both said they will not bring their most advanced AI capabilities to the bloc as a result of an uncertain regulatory future. What’s more, the bulk of the world’s leading AI companies are based in either the U.S. or China.

According to The Wall Street Journal, the EU is getting serious about changing the status quo, with EU Commission President Ursula von der Leyen announcing the InvestAI plan. The plan includes $20 billion to help establish AI gigafactories for training AI models.

“We want Europe to be one of the leading AI continents, and this means embracing a life where AI is everywhere,” von der Leyen said at the AI Action Summit in Paris.

“Too often I hear that the EU is late to the race,” von der Leyen added, insisting the AI race is “far from being over.”

The day before the summit, French President Emmanuel Macron emphasized the need for the EU to simplify its AI regulation in an effort to be more competitive.

“Tomorrow, President von der Leyen will announce the European AI strategy and it will be a very important occasion. But this strategy will be a unique opportunity for Europe to accelerate, to simplify our regulations, to deepen the single market and to invest as well in computing capacities,” he said.

California AG Puts AI Firms On Notice https://www.webpronews.com/california-ag-puts-ai-firms-on-notice/ Mon, 10 Feb 2025 18:23:51 +0000 https://www.webpronews.com/?p=611502 California Attorney General Rob Bonta has issued a legal advisory, putting AI firms on notice about activities that may not be legal.

California is at the epicenter of much of the AI development within the U.S., with Silicon Valley serving as home to many of the leading AI firms. As a result, the firms fall within the jurisdiction of California, which has some of the strictest privacy laws in the country.

The legal advisory acknowledges the good that AI can be used to accomplish.

AI systems are at the forefront of the technology industry, and hold great potential to achieve scientific breakthroughs, boost economic growth, and benefit consumers. As home to the world’s leading technology companies and many of the most compelling recent developments in AI, California has a vested interest in the development and growth of AI tools. The AGO encourages the responsible use of AI in ways that are safe, ethical, and consistent with human dignity to help solve urgent challenges, increase efficiencies, and unlock access to information—consistent with state and federal law.

The advisory then goes on to describe the challenges AI systems pose, and the potential threats they may bring.

AI systems are proliferating at an exponential rate and already affect nearly all aspects of everyday life. Businesses are using AI systems to evaluate consumers’ credit risk and guide loan decisions, screen tenants for rentals, and target consumers with ads and offers. AI systems are also used in the workplace to guide employment decisions, in educational settings to provide new learning systems, and in healthcare settings to inform medical diagnoses. But many consumers are not aware of when and how AI systems are used in their lives or by institutions that they rely on. Moreover, AI systems are novel and complex, and their inner workings are often not understood by developers and entities that use AI, let alone consumers. The rapid deployment of such tools has resulted in situations where AI tools have generated false information or biased and discriminatory results, often while being represented as neutral and free from human bias.

The AG’s office outlines a number of laws that govern AI use, including the state’s Unfair Competition Law, False Advertising Law, several competition laws, a number of civil rights laws, and the state’s election misinformation prevention laws.

The advisory also delves into California’s data protection laws and the role they play in AI development and use cases.

AI developers and users that collect and use Californians’ personal information must comply with CCPA’s protections for consumers, including by ensuring that their collection, use, retention, and sharing of consumer personal information is reasonably necessary and proportionate to achieve the purposes for which the personal information was collected and processed. (Id. § 1798.100.) Businesses are prohibited from processing personal information for non-disclosed purposes, and even the collection, use, retention, and sharing of personal information for disclosed purposes must be compatible with the context in which the personal information was collected. (Ibid.) AI developers and users should also be aware that using personal information for research is also subject to several requirements and limitations. (Id. § 1798.140(ab).) A new bill signed into law in September 2024 confirms that the protections for personal information in the CCPA apply to personal information in AI systems that are capable of outputting personal information. (Civ. Code, § 1798.140, added by AB 1008, Stats. 2024, ch. 804.) A second bill expands the definition of sensitive personal information to include “neural data.” (Civ. Code, § 1798.140, added by SB 1223, Stats. 2024, ch. 887.)

The California Invasion of Privacy Act (CIPA) may also impact AI training data, inputs, or outputs. CIPA restricts recording or listening to private electronic communication, including wiretapping, eavesdropping on or recording communications without the consent of all parties, and recording or intercepting cellular communications without the consent of all parties. (Pen. Code, § 630 et seq.) CIPA also prohibits use of systems that examine or record voice prints to determine the truth or falsity of statements without consent. (Id. § 637.3.) Developers and users should ensure that their AI systems, or any data used by the system, do not violate CIPA.

California law contains heightened protection for particular types of consumer data, including education and healthcare data that may be processed or used by AI systems. The Student Online Personal Information Protection Act (SOPIPA) broadly prohibits education technology service providers from selling student data, engaging in targeted advertising using student data, and amassing profiles about students, except for specified school purposes. (Bus. & Prof. Code, § 22584 et seq.) SOPIPA applies to services and apps used primarily for “K-12 school purposes.” This includes services and apps for home or remote instruction, as well as those intended for use at a public or private school. Developers and users should ensure any educational AI systems comply with SOPIPA, even if they are marketed directly to consumers.

The advisory also cites the state’s Confidentiality of Medical Information Act (CMIA) which governs how patient data is used, as well as the required disclosures before that data can be shared with outside companies.

The AG’s notice concludes by emphasizing the need for AI companies to remain vigilant about the various laws and regulations that may impact their work.

Beyond the laws and regulations discussed in this advisory, other California laws—including tort, public nuisance, environmental and business regulation, and criminal law—apply equally to AI systems and to conduct and business activities that involve the use of AI. Conduct that is illegal if engaged in without the involvement of AI is equally unlawful if AI is involved, and the fact that AI is involved is not a defense to liability under any law.

This overview is not intended to be exhaustive. Entities that develop or use AI have a duty to ensure that they understand and are in compliance with all state, federal, and local laws that may apply to them or their activities. That is particularly so when AI is used or developed for applications that could carry a potential risk of harm to people, organizations, physical or virtual infrastructure, or the environment.

Conclusion

The AG’s notice serves as a warning shot to AI firms, emphasizing that they are not above existing law just because they are creating industry-defining technology.

Many legal issues surrounding AI are currently being decided in the court system, although some experts fear AI companies are moving so fast that any legal decisions clarifying the legality of their actions may come too late to have any appreciable effect.

California, at least, appears to be taking a tougher stance, putting firms on notice that they must adhere to existing law, or face the consequences.

Google Is Adding Delete for Everyone to RCS Chats https://www.webpronews.com/google-is-adding-delete-for-everyone-to-rcs-chats/ Fri, 07 Feb 2025 13:00:00 +0000 https://www.webpronews.com/?p=611463 Google is adding a major new feature to RCS chats, one that will bring it closer to feature parity with Signal and WhatsApp.

According to the folks over at Android Authority, a teardown of the latest Android APKs shows that Google is adding the ability to delete messages for everyone in a chat. This is a significant improvement over the current version of RCS chats in Google Messages, which only deletes a message on your local device, not for everyone else in the chat.

AA found code that shows users will have a choice between “Delete for everyone” and “Delete for me.” This is virtually identical to the options Signal and WhatsApp present users. The code indicates that Messages currently notifies users when a message is deleted, or when a user attempts to delete one.
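The difference between the two options can be modeled with a small sketch. The class and method names below are illustrative assumptions, not Google’s actual implementation:

```python
# Hypothetical model of the two deletion semantics the teardown describes.
# Class and method names are illustrative assumptions, not Google's code.
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    text: str
    tombstone: bool = False   # replaced with a "message deleted" notice

@dataclass
class Chat:
    participants: list
    messages: list = field(default_factory=list)
    local_hidden: dict = field(default_factory=dict)  # per-user hidden msgs

    def delete_for_me(self, user, msg):
        # Hides the message only in this user's view; others still see it.
        self.local_hidden.setdefault(user, set()).add(id(msg))

    def delete_for_everyone(self, user, msg):
        # Only the original sender can retract a message for all participants.
        if msg.sender != user:
            raise PermissionError("only the sender can delete for everyone")
        msg.tombstone = True
        msg.text = ""

    def visible_text(self, user, msg):
        if id(msg) in self.local_hidden.get(user, set()):
            return None
        return "Message deleted" if msg.tombstone else msg.text

chat = Chat(participants=["alice", "bob"])
m = Message("alice", "typo-ridden draft")
chat.messages.append(m)
chat.delete_for_me("bob", m)          # bob hides it locally
chat.delete_for_everyone("alice", m)  # alice retracts it for all
print(chat.visible_text("alice", m))  # Message deleted
```

In a real implementation the retraction would be a protocol message propagated to every device; the point of the sketch is the visibility rules, including the deletion notice the teardown suggests other participants would see.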

RCS is the next evolution of traditional text messaging. Unlike its predecessor, SMS, RCS offers advanced features like those found in iMessage, WhatsApp, and Signal, including group management, read receipts, file transfers, and improved security.

Google’s initial implementation lacked a number of features, including end-to-end encryption (E2EE) and Delete for Everyone. As RCS adoption continues, however, Google is continuing to add features.

RCS also stands as one of the best options for cross-platform texting with Apple’s iMessage, with Apple finally adopting RCS for cross-platform conversations. The only major limitation is a lack of E2EE when communicating between platforms, but the two companies are working to resolve it.

As AA points out, there’s no guarantee the feature will make it into Messages but, if it does, it would be a welcome addition.

Red Hat Working to Integrate AI Into Fedora and GNOME https://www.webpronews.com/red-hat-working-to-integrate-ai-into-fedora-and-gnome/ Wed, 05 Feb 2025 12:00:00 +0000 https://www.webpronews.com/?p=611425 Christian F.K. Schaller, Director of Software Engineering at Red Hat, says the company is working to integrate IBM’s AI models into Fedora Workstation and GNOME.

IBM, which owns Red Hat, has been developing its Granite line of AI models, designed specifically for business applications. IBM has released Granite 3.0, its latest version, under the Apache 2.0 license, a permissive license that makes it ideal for open source projects.

Schaller says Red Hat is working to incorporate Granite into Fedora and GNOME, giving Linux users access to a variety of AI-powered tools.

One big item on our list for the year is looking at ways Fedora Workstation can make use of artificial intelligence. Thanks to IBM’s Granite effort we now have an AI engine that is available under proper open source licensing terms and which can be extended for many different use cases. Also the IBM Granite team has an aggressive plan for releasing updated versions of Granite, incorporating new features of special interest to developers, like making Granite a great engine to power IDEs and similar tools. We’ve been brainstorming various ideas in the team for how we can make use of AI to provide improved or new features to users of GNOME and Fedora Workstation. This includes making sure Fedora Workstation users have access to great tools like RamaLama, that we make sure setting up accelerated AI inside Toolbx is simple, that we offer a good Code Assistant based on Granite and that we come up with other cool integration points.

Wayland Improvements

Schaller goes on to detail several other improvements, starting with Wayland, the successor to the X11 display server. Last year saw a bit of drama with Wayland development, with GNOME developers often accused of holding up progress, or blocking protocols they don’t see a need for within GNOME itself.

Schaller addresses those issues, highlighting the value of the “ext” namespace for extensions to Wayland that may not appeal to every desktop environment, but still serve a valuable purpose for some.

The Wayland community had some challenges last year with frustrations boiling over a few times due to new protocol development taking a long time. Some of it was simply the challenge of finding enough people across multiple projects having the time to follow up and help review while other parts are genuine disagreements of what kind of things should be Wayland protocols or not. That said I think that problem has been somewhat resolved with a general understanding now that we have the ‘ext’ namespace for a reason, to allow people to have a space to review and make protocols without an expectation that they will be universally implemented. This allows for protocols of interest only to a subset of the community going into ‘ext’ and thus allowing protocols that might not be of interest to GNOME and KDE for instance to still have a place to live.

Flatpak Improvements

Similarly, Flatpak saw major improvements in 2024. Flatpak is a containerized application format that includes all necessary dependencies, rather than relying on the underlying system. As a result, Flatpak is ideal for installing the latest and greatest version of a package—even on stable releases like Debian—without worrying about conflicts or risking destabilizing the system.

Because of its containerized nature, however, Flatpaks have traditionally had some limitations, such as connecting to USB devices. Schaller highlights the progress that was made, thanks to the USB portal implementation.

Some major improvements to the Flatpak stack have happened recently with the USB portal merged upstream. The USB portal came out of the Sovereign fund funding for GNOME and it gives us a more secure way to give sandboxed applications access to your USB devices. In a somewhat related note we are still working on making system daemons installable through Flatpak, with the use case being applications that have a system daemon to communicate with a specific piece of hardware for example (usually through USB). Christian Hergert got this on his todo list, but we are at the moment waiting for Lennart Poettering to merge some pre-requisite work into systemd that we want to base this on.

Other Improvements

Schaller touts the additional improvements being made, including to High Dynamic Range (HDR), the PipeWire audio server, MIPI camera support, accessibility, Firefox, and the GNOME Software app.

Fedora’s developers have made it clear that they want the distro, which serves as an upstream for Red Hat Enterprise Linux, to be “the best community platform for AI.” Integrating IBM’s Granite is a major step in that direction.

Google Relies On AI-Assist Threat Detection to Keep Android Safe https://www.webpronews.com/google-relies-on-ai-assist-threat-detection-to-keep-android-safe/ Mon, 03 Feb 2025 16:32:24 +0000 https://www.webpronews.com/?p=611389 Google has released a new security blog post, detailing how the company worked to keep Android and Google Play safe from bad actors during 2024.

According to the company, Google blocked 2.36 million apps from being published because they violated Google Play policies. The company also banned more than 158,000 developer accounts for attempting to publish harmful apps. In addition, Google stopped 1.3 million apps from gaining excessive or unnecessary access to users’ sensitive data.

The company broke down exactly how it accomplished these results, with AI playing a major role.

To keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google’s advanced AI to improve our systems’ ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play.
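The AI-assisted review flow Google describes can be sketched as a simple triage: a model’s risk score routes each submission to automatic blocking, human review, or approval. The thresholds and names here are assumptions for illustration, not Google’s actual system.

```python
# Illustrative triage for AI-assisted app review: a malware-risk score
# routes each submission. Thresholds and names are assumptions, not
# Google's actual pipeline.

def triage(risk_score, block_at=0.9, review_at=0.4):
    """Route an app submission based on a model's malware-risk score."""
    if risk_score >= block_at:
        return "auto-block"
    if risk_score >= review_at:
        return "human review"   # reviewer sees the model's evidence
    return "approve"

submissions = {"app_a": 0.95, "app_b": 0.55, "app_c": 0.10}
print({name: triage(score) for name, score in submissions.items()})
```

The design point is the one Google’s statement makes: the model does not replace human reviewers, it concentrates their attention on the ambiguous middle band, which is how over 92% of human reviews can be “AI-assisted.”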

Google also worked heavily with developers to lessen apps’ reliance on sensitive user data.

To protect user privacy, we’re working with developers to reduce unnecessary access to sensitive data. In 2024, we prevented 1.3 million apps from getting excessive or unnecessary access to sensitive user data. We also required apps to be more transparent about how they handle user information by launching new developer requirements and a new “Data deletion” option for apps that support user accounts and data collection. This helps users manage their app data and understand the app’s deletion practices, making it easier for Play users to delete data collected from third-party apps.

We also worked to ensure that apps use the strongest and most up-to-date privacy and security capabilities Android has to offer. Every new version of Android introduces new security and privacy features, and we encourage developers to embrace these advancements as soon as possible. As a result of partnering closely with developers, over 91% of app installs on the Google Play Store now use the latest protections of Android 13 or newer.

Google also touted Google Play’s multi-layered protection features.

To create a trusted experience for everyone on Google Play, we use our SAFE principles as a guide, incorporating multi-layered protections that are always evolving to help keep Google Play safe. These protections start with the developers themselves, who play a crucial role in building secure apps. We provide developers with best-in-class tools, best practices, and on-demand training resources for building safe, high-quality apps. Every app undergoes rigorous review and testing, with only approved apps allowed to appear in the Play Store. Before a user downloads an app from Play, users can explore its user reviews, ratings, and Data safety section on Google Play to help them make an informed decision. And once installed, Google Play Protect, Android’s built-in security protection, helps to shield their Android device by continuously scanning for malicious app behavior.

While the Play Store offers best-in-class security, we know it’s not the only place users download Android apps – so it’s important that we also defend Android users from more generalized mobile threats. To do this in an open ecosystem, we’ve invested in sophisticated, real-time defenses that protect against scams, malware, and abusive apps. These intelligent security measures help to keep users, user data, and devices safe, even if apps are installed from various sources with varying levels of security.

Google Play Protect automatically scans every app on Android devices with Google Play Services, no matter the download source. This built-in protection, enabled by default, provides crucial security against malware and unwanted software. Google Play Protect scans more than 200 billion apps daily and performs real-time scanning at the code-level on novel apps to combat emerging and hidden threats, like polymorphic malware. In 2024, Google Play Protect’s real-time scanning identified more than 13 million new malicious apps from outside Google Play.
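Google’s figure of 200 billion scans per day implies a striking sustained rate, easy to check:

```python
# Rough rate implied by Google's figure of 200 billion app scans per day.
scans_per_day = 200_000_000_000
per_second = scans_per_day / 86_400      # seconds in a day
print(f"~{per_second / 1e6:.1f} million scans per second")
```

That works out to roughly 2.3 million scans every second, around the clock.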

Google’s 2024 Revelations Are Good News for Users

Google’s revelations regarding Google Play and Android security in 2024 are good news for its users. Apple has long been known as the privacy option when it comes to mobile ecosystems. Google, on the other hand, has repeatedly been accused of abusing user privacy.

While it’s certainly reassuring to see how many malicious apps and developers Google successfully blocked, it’s also good to see the company take definitive steps to stop apps from accessing excessive or unnecessary sensitive user data.

As the world’s largest mobile operating system, Android is used by billions of individuals. Unfortunately, the operating system is built and maintained by the world’s largest advertising company, meaning it also serves as a way for Google to make money off of its users via their data and serving them ads.

Seeing Google take measures to improve privacy and limit app access to sensitive data is a win for users—even if the company still has a ways to go before it matches Apple.

Microsoft Kills Its Microsoft Defender VPN for Individuals & Families https://www.webpronews.com/microsoft-kills-its-microsoft-defender-vpn-for-individuals-families/ Sun, 02 Feb 2025 16:45:17 +0000 https://www.webpronews.com/?p=611383 Microsoft has made a move that is sure to ruffle some feathers, eliminating its VPN service for Microsoft 365 Personal and Family subscriptions.

Microsoft recently raised the price of its Microsoft 365 subscriptions, but that hasn’t stopped the company from stripping away an important feature from the Personal and Family plan. The company previously offered a VPN as part of its Defender package, giving users an alternative to paid options from other companies.

The company announced the change in a support article, saying the VPN option will go away on February 28, 2025. The company says it made the decision after evaluating the feature’s usage and effectiveness.

Our goal is to ensure you, and your family remain safer online. We routinely evaluate the usage and effectiveness of our features. As such, we are removing the privacy protection feature and will invest in new areas that will better align to customer needs.

The company touted the benefits Defender continues to provide for Personal and Family plans.

Microsoft Defender continues to provide data and device protection, identity theft and credit monitoring (US only), plus threat alerts to keep you safer online. Microsoft Defender requires a Microsoft 365 Personal or Family subscription.

Microsoft 365 Personal: Your Microsoft 365 subscription enables you to protect up to five devices. US subscribers can monitor their credit and 60+ types of personal info, get 24/7 identity theft support, and up to $1 million identity insurance coverage for restoration-related legal and expert fees & up to $100,000 for lost funds recovery.

Microsoft 365 Family: Your Microsoft 365 subscription enables you to protect up to 5 devices per person. US subscribers can activate identity theft monitoring for each family member, including 60+ types of data and credit score, get 24/7 identity theft support, and up to $1 million identity insurance coverage for restoration-related legal and expert fees & up to $100,000 for lost funds recovery.

Microsoft also said Defender for iOS will continue to offer a VPN, but it is different from what was offered in the Personal and Family plans.

Defender for iOS users, please note, web protection (anti-phishing) on iOS uses a VPN to help keep you safer from harmful links. You will continue to see a VPN used for the purposes of web protection and this local (loop-back) VPN is different from the privacy protection feature.

Users looking for an affordable VPN that is widely considered to be the best on the market should take a look at Mullvad as a replacement option.

SoftBank May Invest $25 Billion In OpenAI https://www.webpronews.com/softbank-may-invest-25-billion-in-openai/ Fri, 31 Jan 2025 17:57:58 +0000 https://www.webpronews.com/?p=611363 SoftBank is the latest company interested in making a massive investment in OpenAI, reportedly looking to invest as much as $25 billion.

OpenAI has been wooing investors as it continues to spend money at an extraordinary rate in its quest for artificial general intelligence. Microsoft has been one of the company’s largest investors, but the relationship between the two companies appears to be cooling.

According to the Financial Times, via TechCrunch, SoftBank could invest between $15 billion and $25 billion in the AI firm. The investment would be in addition to the $15 billion it plans to invest in the US Stargate AI project.

As the outlets point out, the investment would be SoftBank’s largest since its failed WeWork bet. What’s more, the investment would also serve to give OpenAI more independence from Microsoft.

AI-Powered DevSecOps: Forecasting the Future of Software Security in 2025 https://www.webpronews.com/ai-powered-devsecops-forecasting-the-future-of-software-security-in-2025/ Fri, 31 Jan 2025 12:05:03 +0000 https://www.webpronews.com/?p=611358 In the rapidly evolving tech landscape of 2025, artificial intelligence (AI) is not just enhancing software development—it’s revolutionizing security practices within DevSecOps, the integration of development, security, and operations. Here’s a detailed look at the transformative predictions shaping this sector:

AI-Driven Vulnerability Management

2025 marks a significant leap in how vulnerabilities are managed, with AI playing a pivotal role. Beyond merely detecting security flaws, AI systems now offer remediation strategies, learning from vast datasets to suggest fixes tailored to specific software environments.

“AI-powered DevSecOps is fundamentally changing the landscape of software development and cybersecurity,” notes Ayal Cohen of OpenText in a recent post on X.

The Evolution to ‘Shift Everywhere’

What was once known as the “shift-left” approach in security—where security is addressed early in the development cycle—has evolved. We’re witnessing a “shift everywhere” paradigm, where AI ensures security is omnipresent, from code conception to post-deployment monitoring.

This development means developers can work directly with real-time security insights in their integrated development environments (IDEs), while CI/CD pipelines use AI for continuous security checks.

The Dual Role of AI in Cybersecurity

AI’s role in 2025 is as much about offense as it is about defense. While AI enhances threat detection and incident response, it also enables attackers to craft more sophisticated threats.

“AI’s potential in cybersecurity is a double-edged sword, empowering both defenders and attackers,” shares Ayal Cohen in another X post. This necessitates a nuanced approach to AI implementation in security protocols.

API Security: A New Frontier

With the rise of microservices and cloud-native applications, APIs have become critical, and securing them is paramount. AI is at the forefront here, predicting and thwarting threats by understanding and monitoring API behavior patterns.
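
A minimal sketch of what “monitoring API behavior patterns” can mean in practice: the traffic figures and threshold below are invented, and real systems use far richer models than a z-score, but the shape of the idea is the same.

```python
# Illustrative anomaly detector for an API endpoint's request rate.
# A z-score test against a historical baseline stands in for the ML models
# the article describes; the numbers are made up.
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, threshold: float = 3.0) -> bool:
    """Flag a request rate more than `threshold` standard deviations
    from the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

baseline = [100, 102, 98, 101, 99, 103, 97, 100]  # requests/minute, hypothetical
print(is_anomalous(baseline, 101))  # False: normal traffic
print(is_anomalous(baseline, 900))  # True: burst consistent with scraping or abuse
```

The point is not the statistics but the placement: the check runs continuously against live API telemetry rather than in a periodic audit.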

“API security is moving from a technical concern to a boardroom imperative,” according to a post by vmblog on X, with AI being pivotal in managing the security of these interfaces.

The Rise of Integrated Development Platforms

The siloed toolsets of the past are giving way to AI-enhanced, integrated platforms. These platforms not only streamline development but also bake security into every step, reducing the cognitive load on developers and enhancing productivity.

Addressing Security Debt with AI

As AI accelerates development, it also risks increasing security debt. However, AI is also the proposed antidote, capable of scaling remediation efforts to match the speed of development, ensuring vulnerabilities are identified and addressed swiftly.

As we stand in 2025, integrating AI into DevSecOps is proving to be a game-changer for software security. The industry is at a crossroads where the benefits of AI must be balanced with the potential risks it introduces.

The tech sector’s challenge is clear: harness AI to make software development faster and fundamentally more secure. This year might well be remembered as the moment AI truly began to reshape the security landscape, for better or worse, in software development.

AI Supercharges Developer Productivity: Transforming Code Creation to System Maintenance https://www.webpronews.com/ai-supercharges-developer-productivity-transforming-code-creation-to-system-maintenance/ Thu, 30 Jan 2025 11:38:42 +0000 https://www.webpronews.com/?p=611293 Artificial Intelligence (AI) has become the catalyst for a productivity renaissance in the high-velocity world of software development, where demand outstrips supply. For professional developers, AI isn’t just another tool; it’s a transformative force that reshapes the entire software lifecycle. Here’s how AI revolutionizes development for those at the forefront of code creation, testing, maintenance, and beyond.

Code Creation: Beyond Autocomplete

AI has transcended simple code suggestions to become an integral part of the coding process. Tools like GitHub Copilot or DeepMind’s AlphaCode now offer intelligent code completion beyond syntax, proposing entire functions or algorithms based on context, project history, and global codebases.

What was once a solitary task has evolved into pair programming with AI, where the machine suggests alternative implementations, highlights potential improvements, or alerts to security vulnerabilities in real time. This shift allows developers to bypass boilerplate code, focusing instead on high-level logic and innovative architecture.

Testing: Comprehensive and Predictive

In the realm of testing, AI has introduced a predictive element. It generates test cases, including those that human testers might not conceive, by learning from vast datasets of code, bugs, and fixes. This results in enhanced test coverage with less manual effort. AI also optimizes CI/CD pipelines by predicting which tests are most likely to fail, prioritizing them, or suggesting which tests can be safely removed, accelerating deployment cycles and improving release reliability.
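
The failure-prediction idea can be sketched in a few lines. The test names and histories below are made up, and a production system would learn from code-change features rather than raw pass/fail counts, but the ordering logic is the same.

```python
# Illustrative test prioritization: run the tests most likely to fail first.
# Histories are hypothetical; True means the test failed in that run.
def prioritize(tests: dict[str, list[bool]]) -> list[str]:
    """Order test names by historical failure rate, highest first."""
    def failure_rate(history: list[bool]) -> float:
        return sum(history) / len(history) if history else 0.0
    return sorted(tests, key=lambda name: failure_rate(tests[name]), reverse=True)

history = {
    "test_auth":    [True, True, False, True],    # fails often
    "test_billing": [False, False, False, False],
    "test_search":  [False, True, False, False],
}
print(prioritize(history))  # ['test_auth', 'test_search', 'test_billing']
```

Running the likely failures first means a broken build is caught in seconds instead of at the end of a full suite.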

Maintenance and Monitoring: From Reactive to Predictive

The maintenance phase has significantly shifted from reactive to predictive thanks to AI. Systems now monitor applications in production, detecting anomalies in performance, security, or user behavior. AI can predict potential issues before they escalate, alerting developers in time to take preventative actions. Moreover, when vulnerabilities or bugs surface, AI can suggest patches based on historical data, dramatically speeding up the resolution process. The pinnacle of this trend is self-healing systems where AI autonomously implements fixes, reducing downtime and the urgency for human intervention.
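
A toy version of the predictive-monitoring idea, using an exponential moving average over response latency; the series and thresholds are invented for illustration, and real systems use far more sophisticated models.

```python
# Illustrative predictive monitoring: flag latency samples that exceed a
# smoothed baseline by 50%, surfacing degradation before a hard outage.
def detect_drift(latencies_ms: list[float], alpha: float = 0.2,
                 tolerance: float = 0.5) -> list[int]:
    """Return the indices of samples exceeding the exponential moving
    average by `tolerance` (50%)."""
    ema = latencies_ms[0]
    alerts = []
    for i, sample in enumerate(latencies_ms[1:], start=1):
        if sample > ema * (1 + tolerance):
            alerts.append(i)
        ema = alpha * sample + (1 - alpha) * ema  # update the baseline
    return alerts

series = [120, 118, 125, 122, 310, 122, 119]  # ms; hypothetical, spike at index 4
print(detect_drift(series))  # [4]
```

The alert fires on the anomalous sample itself, before any user-facing failure, which is the shift from reactive to predictive the paragraph describes.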

Documentation and Knowledge Management

AI also plays a crucial role in documentation, automatically updating or generating documentation as code changes, ensuring that technical documentation remains both current and comprehensive. Beyond documentation, AI enhances knowledge management by analyzing code, commit messages, and issues to build a dynamic knowledge base, which can answer developer queries about project history or architectural decisions.

Challenges and Considerations

While AI’s integration into development is largely beneficial, it presents some challenges. Developers must adapt to this new paradigm, learning to critically interpret AI’s suggestions while maintaining their creativity and problem-solving skills. There’s a delicate balance to strike to avoid over-reliance on AI, which could potentially stifle innovation or introduce biases if not managed with ethical considerations in mind.

AI is Not Replacing Developers

AI is not replacing developers but augmenting their capabilities, making them more efficient, creative, and focused on delivering value through complex problem-solving. The future of development is a symbiotic relationship between AI and human developers, where each enhances the other’s strengths. For the professional developer, mastering this integration is not just about keeping up; it’s about leading in an industry that’s increasingly intertwined with artificial intelligence.

DevOps 2025: AI Integration, Enhanced Security, and the Convergence of MLOps https://www.webpronews.com/devops-2025-ai-integration-enhanced-security-and-the-convergence-of-mlops/ Thu, 30 Jan 2025 11:23:11 +0000 https://www.webpronews.com/?p=611290 The year 2025 marks a pivotal point where traditional methodologies are not just enhanced but redefined by the integration of Artificial Intelligence (AI), a strategic focus on security through DevSecOps, and the convergence with Machine Learning Operations (MLOps). This article explores these intertwined trends, their implications for the industry, and the roadmap ahead.

AI Integration in DevOps: The New Frontier

The integration of AI into DevOps, often termed AI/CD (AI-driven Continuous Deployment), represents a paradigm shift. AI’s role in DevOps transcends automation; it’s about prediction, optimization, and self-healing systems.

  • Predictive Analysis: AI algorithms now forecast potential failures or performance bottlenecks before they impact production. Tools like machine learning models analyze historical data from deployments, tests, and logs to predict outcomes with high accuracy.
  • Optimization of Processes: AI-driven optimization goes beyond simple automation. It involves dynamically adjusting resources, optimizing code deployment strategies, or even suggesting architectural changes based on real-time performance data.
  • Self-Healing Systems: Perhaps the most revolutionary aspect is the development of systems that can autonomously diagnose and fix issues. This reduces downtime, enhances reliability, and shifts human effort from reactive maintenance to proactive innovation.
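
The predictive-analysis idea above can be caricatured as a simple risk score over deployment metadata. The features and weights here are invented stand-ins for what a trained model would learn from historical deployments.

```python
# Illustrative deployment-risk score; features and weights are hypothetical,
# standing in for a model learned from historical deploy outcomes.
def deploy_risk(change: dict) -> float:
    """Combine a few change attributes into a 0..1 risk score."""
    score = 0.0
    score += 0.3 * min(change["files_touched"] / 50, 1.0)   # large diffs are riskier
    score += 0.3 * (1.0 if change["touches_migration"] else 0.0)
    score += 0.2 * min(change["recent_failures"] / 5, 1.0)  # unstable history
    score += 0.2 * (0.0 if change["has_rollback_plan"] else 1.0)
    return round(score, 2)

risky = {"files_touched": 60, "touches_migration": True,
         "recent_failures": 4, "has_rollback_plan": False}
safe = {"files_touched": 3, "touches_migration": False,
        "recent_failures": 0, "has_rollback_plan": True}
print(deploy_risk(risky), deploy_risk(safe))  # 0.96 0.02
```

A pipeline could route high-scoring deployments to a canary stage or require an extra approval, which is where the "prediction" actually pays off.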

Enhanced Security: The DevSecOps Evolution

Security in DevOps has evolved from an afterthought to a foundational element, leading to the concept of DevSecOps. In 2025, this evolution is characterized by:

  • Security Automation: Security checks are now integrated into every step of the CI/CD pipeline. From code scanning for vulnerabilities to automated compliance checks, security is built into the product from the ground up.
  • Zero Trust Architecture: With the rise of remote work and cloud services, the zero trust model has become central. Every access, whether from within or outside the network, is authenticated, authorized, and continuously validated.
  • AI in Security: Machine learning models assist in anomaly detection, predicting potential security breaches, and even suggesting remediation strategies. This symbiosis of AI with security practices ensures a more resilient application ecosystem.
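
Security automation in a pipeline often starts with something as simple as pattern-based secret scanning on every commit. The patterns below are a tiny, illustrative subset of what real scanners ship; the snippet being scanned is invented.

```python
# Illustrative CI secret scan: fail the pipeline stage if any pattern matches.
# These three patterns are a toy subset of a real scanner's rule set.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "hardcoded_pw":   re.compile(r"""password\s*=\s*['"][^'"]+['"]""", re.IGNORECASE),
}

def scan(source: str) -> list[str]:
    """Return the names of every secret pattern found in the source text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(source)]

snippet = 'db_password = "hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"'
print(scan(snippet))  # ['aws_access_key', 'hardcoded_pw']
```

Wiring a check like this into every pipeline stage, rather than a quarterly audit, is what "security built in from the ground up" means concretely.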

The Convergence of DevOps and MLOps

The integration of DevOps with MLOps signifies a fundamental shift towards what we can call “AIOps” – AI Operations. Here’s how they converge:

  • Unified Pipelines: Previously separate pipelines for software and model deployment are now converging. This means a single pipeline can handle the deployment of both code and models, ensuring consistency, version control, and traceability.
  • DataOps Integration: The management of data, crucial for both DevOps and MLOps, has led to the rise of DataOps. This ensures data quality, availability, and compliance, facilitating both software and model development.
  • Shared Tools and Practices: Tools like Kubernetes, Docker, and Git have become staples not just for software but also for machine learning models. Practices like blue-green deployments or canary releases are now applied to models, ensuring safe updates and rollbacks.
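
A canary release gate applied to a model rollout might reduce to a comparison like the following; the error counts and tolerance are illustrative, and real gates typically add statistical significance tests.

```python
# Illustrative canary gate for a model rollout, mirroring the canary-release
# practice described above. Numbers and the 1-point tolerance are invented.
def promote_canary(baseline_errors: int, canary_errors: int,
                   requests_each: int, max_regression: float = 0.01) -> bool:
    """Promote the canary only if its error rate hasn't regressed by more
    than `max_regression` (1 percentage point) over the baseline."""
    base_rate = baseline_errors / requests_each
    canary_rate = canary_errors / requests_each
    return canary_rate <= base_rate + max_regression

print(promote_canary(baseline_errors=20, canary_errors=25, requests_each=1000))  # True
print(promote_canary(baseline_errors=20, canary_errors=45, requests_each=1000))  # False
```

Failing the gate triggers the same rollback machinery used for code, which is the payoff of running models through the unified pipeline.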

Challenges and Considerations

While these trends promise a more efficient, secure, and innovative future, they come with challenges:

  • Complexity Management: The integration of various domains increases system complexity, necessitating advanced skills in orchestration and management.
  • Ethical and Privacy Concerns: AI models, particularly in security and predictive analytics, must be developed with ethical considerations in mind, respecting privacy and avoiding bias.
  • Cultural Shift: The convergence requires a cultural shift towards embracing continuous learning, not just for technology but for the methodologies of how teams work together.

All-Encompassing Approach

By 2025, DevOps has transcended its original scope to become an all-encompassing approach where AI, security, and machine learning operations blend seamlessly. This evolution is not just about adopting new tools or technologies but about fostering a new culture of development that is anticipatory, secure, and inherently intelligent.

As we move forward, the key to success will be in how well organizations can adapt to this multifaceted, dynamic environment, ensuring they leverage these trends to drive innovation while managing the inherent challenges effectively.

Italy Investigates DeepSeek Over Privacy Concerns https://www.webpronews.com/italy-investigates-deepseek-over-privacy-concerns/ Wed, 29 Jan 2025 14:27:54 +0000 https://www.webpronews.com/?p=611280 The Italian government is joining the growing list of entities concerned about Chinese AI startup DeepSeek, launching an investigation over privacy issues.

DeepSeek has quickly gained recognition for its impressive AI model, one that rivals the best OpenAI has to offer. Even more impressive, DeepSeek reportedly built its model for a mere $3-$5 million, a fraction of the estimated $100 million OpenAI spent, while doing so with less capable, export-restricted Nvidia hardware.

The Italian data and privacy watchdog, the Garante Per La Protezione Dei Dati Personali (GPDP), announced it was launching an investigation of DeepSeek over “possible risk for data from millions of people in Italy.”

The GPDP made the announcement on its official website (machine translated):

The Guarantor for the protection of personal data has sent a request for information to Hangzhou DeepSeek Artificial Intelligence and to Beijing DeepSeek Artificial Intelligence, the companies that provide the DeepSeek chatbot service, both on the web platform and on the App.

Given the possible high risk for the data of millions of people in Italy, the Authority asked the two companies and their affiliates to confirm what personal data are collected, from which sources, for what purposes, what the legal basis for the processing is, and whether they are stored on servers located in China.

The Guarantor also asked the companies what type of information is used to train the artificial intelligence system and, in the event that personal data is collected through web scraping activities, to clarify how users registered and those not registered in the service have been or are informed about the processing of their data.

Given that DeepSeek is a Chinese AI firm, it’s a safe bet this is not the last investigation it will face.

Linux Foundation Launches ‘Supporters of Chromium-Based Browsers’ https://www.webpronews.com/linux-foundation-launches-supporters-of-chromium-based-browsers/ Mon, 27 Jan 2025 13:00:00 +0000 https://www.webpronews.com/?p=611211 The Linux Foundation has launched the “Supporters of Chromium-Based Browsers,” an effort to further the development of Chromium-based web browsers.

Chromium is the open-source web browser that serves as the basis for Google Chrome, Microsoft Edge, Opera, Brave, and others. While it may be the most popular and widely used browser code base, Google still accounts for the vast majority of development, as the company says in a blog post.

In 2024, Google made over 100,000 commits to Chromium, accounting for ~94 percent of contributions. While we have no intention of reducing this investment, we continue to welcome others stepping up to invest more.

With the DOJ seeking to force Google to sell Chrome, the Linux Foundation is seeking to foster a healthy Chromium ecosystem.

“With the launch of the Supporters of Chromium-Based Browsers, we are taking another step forward in empowering the open source community,” said Jim Zemlin, executive director of the Linux Foundation. “This project will provide much-needed funding and development support for open development of projects within the Chromium ecosystem.”

The Foundation says the initiative will create a neutral space for Chromium development.

Google, Meta, Microsoft, Opera and others have joined the initiative, although Brave is notably absent from the list.

  • “With the incredible support of the Linux Foundation, we believe the Supporters of Chromium-Based Browsers is an important opportunity to create a sustainable platform to support industry leaders, academia, developers, and the broader open source community in the continued development and innovation of the Chromium ecosystem,” said Parisa Tabriz, VP, Chrome.
  • “Microsoft is pleased to join this initiative which will help drive collaboration within the Chromium ecosystem. This initiative aligns with our commitment to the web platform through meaningful and positive contributions, engagement in collaborative engineering, and partnerships with the community to achieve the best outcome for everyone using the web,” said Meghan Perez, VP, Microsoft Edge.
  • “As one of the major browsers contributing to the Chromium project, Opera is pleased to join the Supporters of Chromium-Based Browsers and to lend our efforts towards the development of the open-source ecosystem. We look forward to collaborating with members of the project to foster this growth and to keep building innovative and compelling products for all users,” said Krystian Kolondra, EVP Browsers, Opera.
OpenAI Unveils ‘Operator’: Your New Digital Assistant for Web Tasks https://www.webpronews.com/openai-unveils-operator-your-new-digital-assistant-for-web-tasks/ Fri, 24 Jan 2025 15:45:19 +0000 https://www.webpronews.com/?p=611176 In the hyper-competitive world of artificial intelligence, where the race for the most advanced AI agent is akin to the gold rush of yesteryears, OpenAI has just struck a new vein with the release of “Operator.” This isn’t just another AI tool; it’s your new digital sidekick, capable of navigating the internet and performing tasks for you, from booking travel to managing your online shopping list.

Launched on January 23, 2025, Operator starts its journey as a “research preview” available only to those who subscribe to OpenAI’s ChatGPT Pro tier, a $200 monthly investment into the future of AI interaction. But what does this mean for the average tech-savvy individual or enterprise? It means having an AI that isn’t just about answering questions but acting on them.

The Mechanics of Operator

Operator leverages a novel model called the Computer-Using Agent (CUA), which utilizes the vision capabilities of OpenAI’s GPT-4o model alongside advanced reasoning skills honed by reinforcement learning. This combination allows Operator to “see” websites through screenshots and interact with them via clicks, scrolls, and keystrokes, essentially emulating human navigation of the web.

The CUA model is designed to understand and manipulate graphical user interfaces (GUIs) by interpreting visual cues from browser windows. Here’s a deeper dive for the developers:

  • Vision and Interaction: Operator uses a convolutional neural network (CNN) layer to process visual inputs from screenshots, identifying actionable elements like buttons or text fields. The model then applies a decision-making algorithm, which could be likened to a mix of deep Q-learning for action selection and a transformer-based approach for understanding context.
  • API Integration: While Operator doesn’t rely on traditional APIs for interaction, developers can expect an API release that allows for integration of CUA capabilities into other applications. This API will likely include endpoints for initiating tasks, monitoring progress, and managing session data.
  • Performance Metrics: In benchmarks like OSWorld, where AI models are tested on their ability to mimic human computer use, Operator scored a 38.1%, surpassing competitors like Anthropic’s model but not yet reaching human levels (72.4%). In web navigation tasks, it boasts an 87% success rate on WebVoyager, suggesting robust performance in real-world scenarios.
  • Limitations and Adaptability: Operator’s current limitations include struggles with complex interfaces or tasks requiring nuanced human judgment. However, its design includes mechanisms for learning from user feedback, potentially improving over time through online learning techniques.
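
The perceive-decide-act loop described above can be caricatured in a few lines. The element list stands in for what the vision model would extract from a screenshot, the word-overlap scoring is a crude stand-in for a learned policy, and nothing here reflects OpenAI’s actual model or API.

```python
# Toy perceive->decide->act step in the spirit of the CUA description above.
# "screen" is a pre-parsed list of UI elements, standing in for what a vision
# model would extract from a screenshot. Entirely hypothetical.
def decide(elements: list[dict], goal: str) -> dict:
    """Choose the element whose label shares the most words with the goal."""
    def score(el: dict) -> int:
        label_words = set(el["label"].lower().split())
        return sum(w in label_words for w in goal.lower().split())
    best = max(elements, key=score)
    return {"action": "click", "target": best["id"]}

screen = [
    {"id": "btn-1", "label": "Add to cart"},
    {"id": "btn-2", "label": "Checkout now"},
    {"id": "lnk-3", "label": "Continue shopping"},
]
print(decide(screen, "checkout now"))  # {'action': 'click', 'target': 'btn-2'}
```

A real agent would run this loop repeatedly, re-screenshotting after every click, scroll, or keystroke until the task completes or requires user input.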

Safety in an Autonomous World

With great power comes great responsibility, and OpenAI is acutely aware of this. Operator isn’t given free rein; it operates under stringent safety protocols. For instance, it won’t send emails or alter calendar events without user intervention, aiming to prevent potential misuse or privacy breaches. OpenAI’s safety net includes both automated and human-reviewed monitoring to pause any suspicious activity, reflecting broader concerns about AI autonomy.

  • User Control: Before executing tasks with significant consequences, like making purchases, Operator requests confirmation from the user, ensuring a layer of human oversight.
  • Privacy: Operator’s design includes options to clear browsing data, manage cookies, and opt out of data collection for model improvement, all accessible through a dedicated settings panel.
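
The user-control behavior above reduces to a gate like the following; the action names are hypothetical, and the `confirm` callback stands in for whatever prompt the real product shows the user.

```python
# Hypothetical confirmation gate mirroring Operator's described behavior:
# consequential actions run only after an explicit user approval.
CONSEQUENTIAL = {"purchase", "send_email", "delete", "modify_calendar"}

def execute(action: str, payload: str, confirm) -> str:
    """Run `action`; for consequential actions, require `confirm` (a stand-in
    for a UI prompt) to return True first."""
    if action in CONSEQUENTIAL and not confirm(action, payload):
        return f"blocked: {action} awaiting user approval"
    return f"executed: {action}"

def always_deny(action, payload):
    return False

print(execute("scroll", "page 2", always_deny))            # executed: scroll
print(execute("purchase", "$89 headphones", always_deny))  # blocked: purchase awaiting user approval
```

Keeping the allow-list small and the default "ask" is the conservative design the article attributes to OpenAI.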

The Competitive Scene

The tech world isn’t short of AI agents; Anthropic has its “Computer Use” feature, and Google is rumored to be working on similar tech. But Operator’s immediate integration into the ChatGPT ecosystem gives it a head start. The buzz on X has been palpable, with users and tech analysts alike weighing in on its potential. One notable post from @MatthewBerman highlights, “OpenAI’s first AGENTS are here! ‘Operator’ can control a browser and accomplish real-world tasks on your behalf,” showcasing the community’s excitement and the platform’s capabilities.

Looking Ahead

OpenAI’s move with Operator isn’t just about adding another tool to its belt; it’s about redefining how we interact with technology. The company has teased further integration of Operator’s capabilities across its product lineup, hinting at a future where AI agents handle the mundane, allowing humans to focus on the creative and strategic.

  • Developer Opportunities: With plans to make CUA available through an API, developers can look forward to building applications that leverage Operator’s capabilities for automation in sectors like customer service, e-commerce, and personal productivity.
  • Scalability and Customization: The model’s architecture allows for scaling down to smaller, more specific tasks or scaling up for broader, more complex workflows, offering flexibility for different use cases.

However, the path forward for Operator is dotted with challenges. Adapting to the ever-evolving web, ensuring privacy, and managing the ethical implications of autonomous agents will be critical. Developers and tech enthusiasts are watching closely, eager to see how Operator will evolve, adapt, and perhaps, revolutionize our daily digital interactions.

As we stand on this new frontier, one thing is clear: with Operator, OpenAI isn’t just aiming to assist but to transform our digital lives, one task at a time.
