Xiaomi 17 Series & Pad 8 Confirmed to Launch in India on February 28


Xiaomi has officially confirmed that the Xiaomi 17 series will launch on February 28, 2026, at the Mobile World Congress (MWC) in Barcelona. The Chinese smartphone maker will debut the devices in India on the same day as the global unveiling. The company will begin the event at 2 PM Barcelona time (6:30 PM IST) and will likely livestream it for viewers worldwide. Alongside the new smartphones, Xiaomi will also introduce the Xiaomi Pad 8 in India.

A key highlight of the Xiaomi 17 lineup is its upgraded camera partnership with Leica. The companies have shifted from a simple collaboration to a strategic co-creation approach: Leica is now more deeply involved in camera design and tuning. The goal is to offer users a more refined photography experience with improved lighting, natural colors, and professional-level output.

Xiaomi 17 Ultra


The Xiaomi 17 Ultra will be the star of the show. That’s because it’ll come with a massive 6.9-inch AMOLED display with 1.5K resolution, a 120Hz refresh rate, and a brightness of up to 3500 nits. In terms of performance, the Snapdragon 8 Elite Gen 5 chipset will be the beating heart, with variants up to 16GB of RAM and 1TB of internal storage.

Additionally, it includes a 200MP periscope sensor to deliver high-quality zoom shots. The phone also houses a 6800mAh battery with 90W fast charging. The company uses leather and matte finishes in the design, taking inspiration from classic Leica cameras.

Xiaomi 17 & Pad 8


The Xiaomi 17 will debut alongside the Ultra version at the same event. It features a 6.3-inch AMOLED display and a 50MP triple-camera setup on the back, and it is powered by the Snapdragon 8 Elite Gen 5 processor with a 7000mAh battery.

Alongside the smartphones, Xiaomi is expected to introduce the Xiaomi Pad 8 in India. The tablet comes with an 11.2-inch 3.2K LCD screen supporting a 144Hz refresh rate, runs on the Snapdragon 8 Elite processor, and offers up to 16GB of RAM. For photography and video calls, it features a 50MP rear camera and a 32MP selfie camera, and it packs a 9200mAh battery.


Expected Price

The company has not revealed India-specific pricing so far. In global markets, Xiaomi plans to launch the Xiaomi 17 Ultra at around €1,499 and the regular Xiaomi 17 at about €999. Pricing in India may change depending on local taxes and import costs.


AI training lawsuit drags Apple in yet again for alleged use of pirated book dataset


AI training with sketchy data repository “The Pile” returns to the courts in a lawsuit by Chicken Soup for the Soul, LLC accusing just about all of big tech of piracy. The problem is, Apple denies using it to train Apple Intelligence.

Apple accused of using ‘The Pile’ for AI training yet again

Artificial intelligence is a term that has virtually lost all meaning because it is applied to everything. In that sense, a lawsuit appears to have mistakenly included Apple, which has previously denied using the dataset in question.
According to a lawsuit from Chicken Soup for the Soul, LLC, Apple, Meta, xAI, Google, Anthropic, OpenAI, Perplexity, and NVIDIA are all in violation of copyright thanks to training their respective artificial intelligence tools on a dataset known as “The Pile.” While that dataset is filled with proprietary content, like YouTube subtitle files, it wasn’t used by Apple to train Apple Intelligence.


ENIAC, the General-Purpose Digital Computer, Is 80


Happy 80th anniversary, ENIAC! The Electronic Numerical Integrator and Computer, the first large-scale, general-purpose, programmable electronic digital computer, helped shape our world.

On 15 February 1946, ENIAC—developed in the Moore School of Electrical Engineering at the University of Pennsylvania, in Philadelphia—was publicly demonstrated for the first time. Although primitive by today’s standards, ENIAC’s purely electronic design and programmability were breakthroughs in computing at the time. ENIAC made high-speed, general-purpose computing practicable and laid the foundation for today’s machines.

On the eve of its unveiling, the U.S. Department of War issued a news release hailing it as a new machine “expected to revolutionize the mathematics of engineering and change many of our industrial design methods.” Without a doubt, electronic computers have transformed engineering and mathematics, as well as practically every other domain, including politics and spirituality.

ENIAC’s success ushered in the modern computing industry and laid the foundation for today’s digital economy. During the past eight decades, computing has grown from a niche scientific endeavor into an engine of economic growth, the backbone of billion-dollar enterprises, and a catalyst for global innovation. Computing has led to a chain of innovations and developments such as stored programs, semiconductor electronics, integrated circuits, networking, software, the Internet, and distributed large-scale systems.


Inside the ENIAC

The motivation for developing ENIAC was the need for faster computation during World War II. The U.S. military wanted to produce extensive artillery firing tables for field gunners to quickly determine settings for a specific weapon, a target, and conditions. Calculating the tables by hand took “human computers” several days, and the available mechanical machines were far too slow to meet the demand.
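A firing-table entry is, at heart, a numerical integration of a projectile's equations of motion. As a rough, hypothetical sketch (the 300 m/s muzzle velocity, the velocity-proportional drag term, and all names below are illustrative assumptions, not the Army's actual ballistics model), here is the kind of calculation each table cell required:

```python
import math

def trajectory_range(v0, elevation_deg, drag_coeff=0.0, dt=0.001, g=9.81):
    """Euler-integrate a projectile's flight and return its horizontal
    range in meters. drag_coeff is a crude velocity-proportional air
    resistance term (illustrative only, not a real ballistics model)."""
    theta = math.radians(elevation_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while y >= 0.0:  # step until the shell returns to ground level
        vx -= drag_coeff * vx * dt
        vy -= (g + drag_coeff * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

# A miniature "firing table": range (in meters) for a few elevation
# settings at an assumed muzzle velocity of 300 m/s.
table = {deg: round(trajectory_range(300.0, deg), 1) for deg in (15, 30, 45, 60)}
print(table)
```

A real table held thousands of such entries, each demanding many multiplications by hand; a machine stepping a loop like this one is what made producing the tables practical.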

In 1942 John Mauchly, an associate professor of electrical engineering at Penn’s Moore School, suggested using vacuum tubes to speed up computer calculations. Following up on his theory, the U.S. Army Ballistic Research Laboratory, which was responsible for providing artillery settings to soldiers in the field, commissioned Mauchly and his colleagues J. Presper Eckert and Adele Katz Goldstine to work on a new high-speed computer. Eckert was a lab instructor at Moore, and Goldstine became one of ENIAC’s programmers. It took them a year to design ENIAC and 18 months to build it.

The computer contained about 18,000 vacuum tubes, which were cooled by 80 air blowers. More than 30 meters long, it filled a 9 m by 15 m room and weighed about 30 tons. It consumed as much electricity as a small town.

Programming the machine was difficult. ENIAC did not have stored programs, so to reprogram the machine, operators manually reconfigured cables with switches and plugboards, a process that took several days.


By the 1950s, large universities had either acquired or built their own machines to rival ENIAC. The schools included Cambridge (EDSAC), MIT (Whirlwind), and Princeton (IAS). Researchers used the computers to model physical phenomena, solve mathematical problems, and perform simulations.

After almost nine years of operation, ENIAC was officially decommissioned on 2 October 1955.

ENIAC in Action: Making and Remaking the Modern Computer, a book by Thomas Haigh, Mark Priestley, and Crispin Rope, describes the machine’s design, construction, and testing and dives into its afterlife. The book also outlines the complex relationship between ENIAC and its designers, as well as its revolutionary approach to computer architecture.

In the early 1970s, there was a controversy over who invented the electronic computer and who would be assigned the patent. In 1973 Judge Earl Richard Larson of U.S. District Court in Minnesota ruled in the Honeywell v. Sperry Rand case that Eckert and Mauchly did not invent the automatic electronic digital computer but instead had derived their subject matter from a computer prototyped in 1939 by John Vincent Atanasoff and Clifford Berry at Iowa State College (now Iowa State University). The ruling granted Atanasoff legal recognition as the inventor of the first electronic digital computer.


IEEE’s ENIAC Milestone

In 1987 IEEE designated ENIAC as an IEEE Milestone, citing it as “a major advance in the history of computing” and saying the machine “established the practicality of large-scale electronic digital computers and strongly influenced the development of the modern, stored-program, general-purpose computer.”

The commemorative Milestone plaque is displayed at the Moore School, by the entrance to the classroom where ENIAC was built.


A paper on the machine, published in 1996 in IEEE Annals of the History of Computing and available in the IEEE Xplore Digital Library, is a valuable source of technical information.


“The Second Life of ENIAC,” an article published in the Annals in 2006, covers a lesser-known chapter in the machine’s history, about how it evolved from a static system—configured and reconfigured through laborious cable plugging—into a precursor of today’s stored-program computers.

A classic history paper on ENIAC was published in the December 1995 IEEE Technology and Society Magazine.

The IEEE Inspiring Technology: 34 Breakthroughs book, published in 2023, features an ENIAC chapter.

The women behind ENIAC

One of the most remarkable aspects of the ENIAC story is the pivotal role women played, according to the book Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer, highlighted in an article in The Institute. There were no “programmers” at that time; only schematics existed for the computer. Six women, known as the ENIAC 6, became the machine’s first programmers.


The ENIAC 6 were Kathleen Antonelli, Jean Bartik, Betty Holberton, Marlyn Meltzer, Frances Spence, and Ruth Teitelbaum.

“These six women found out what it took to run this computer, and they really did incredible things,” a Penn professor, Mitch Marcus, said in a 2006 PhillyVoice article. Marcus teaches in Penn’s computer and information science department.

In 1997 all six female programmers were inducted into the Women in Technology International Hall of Fame, in Los Angeles.

Two other women contributed to the programming. Goldstine wrote ENIAC’s five-volume manual, and Klára Dán von Neumann, wife of John von Neumann, helped train the programmers and debug and verify their code.


To honor the women of ENIAC, the IEEE Computer Society established the annual Computer Pioneer Award in 1981. Eckert and Mauchly were among the award’s first recipients. In 2008 Bartik was honored with the award. Nominations are open to all professionals, regardless of gender.

An ENIAC replica

Last year a group of 80 autistic students, ages 12 to 16, from PS Academy Arizona, in Gilbert, recreated the ENIAC using 22,000 custom parts. It took the students almost six months to assemble.

A ceremony was held in January to display their creation. The full-scale replica features actual-size panels made from layered cardboard and wood. All electronic components are simulated rather than electrically active. The machine, illuminated by hundreds of LEDs, is accompanied by a soundtrack that simulates the deep hum of ENIAC’s transformers and the rhythmic clicking of relays.

This machine prints and tabulates the answers to the problems solved by the ENIAC. (Bettmann/Getty Images)

“Every major unit, accumulators, function tables, initiator, and master programmer is present and placed exactly where it was on the original machine,” Tom Burick, the teacher who mentored the project, said at the ceremony.

The replica, still on display at the school, is expected to be moved to a more permanent spot in the near future.


ENIAC’s legacy

ENIAC’s significance is both technical and symbolic. Technically, it marks the beginning of the chain of innovations that created today’s computational infrastructure. Symbolically, it made governments, militaries, universities, and industry view computation as a tool for improvement and for innovative applications that had previously been impossible. It marked a tectonic shift in the way humans approach problem-solving, modeling, and scientific reasoning.

The ENIAC legacy heralded the computer age, transforming not only science and industry but also education, research, and human communication and interaction.

As Eckert is reported to have said, “There are two epochs in computer history: Before ENIAC and After ENIAC.”

The remarkable evolution of computer hardware during the past 80 years has been sparked by advances in programming languages—the essential drivers of computing.


From the manual rewiring of ENIAC to the orchestration of intelligent, distributed systems, programming languages have steadily evolved to make computers more powerful, expressive, and accessible.

Predictions for computing in the decades ahead

The evolution of computing will continue along multiple trajectories, with the emphasis moving from generalization to specialization (for AI, graphics, security, and networking), from monolithic system design to modular integration, and from performance-centric metrics alone to energy efficiency and sustainability as primary objectives.

Increasingly, security will be built into hardware by design. Computing paradigms will expand beyond traditional deterministic models to embrace probabilistic, approximate, and hybrid approaches for certain tasks.

Those developments will usher in a new era of computing and a new class of applications.


HP Study Finds Many Indian SMBs Still Ignore Printer Security Risks


Cybersecurity in 2026 is one of the most pressing issues since everything we interact with is connected to the internet. HP has just released a new report titled The Workflow Wakeup, highlighting how everyday workplace technologies, including printers, can impact cybersecurity in modern organizations. According to the study, 51% of SMBs consider print security a low priority, even as businesses increasingly adopt digital tools and hybrid work environments.

Print Security Still a Blind Spot

The research was based on responses from 200 IT decision-makers and 600 knowledge workers across Indian SMBs with 50 to 1,000 employees.

One of the most notable findings is that employees often underestimate the risks associated with printers connected to office networks. Around 75% of knowledge workers assume network printers are secure, while 48% do not consider printers to be a cybersecurity threat.

At the same time, concerns about document privacy remain significant. Nearly 49% of workers worry about confidential documents being printed and accessed by the wrong person. The study outlines several key risks organizations worry about when it comes to printing infrastructure:

  1. Cybersecurity threats linked to connected printers
  2. Employees mishandling or misprinting sensitive documents
  3. Managing security across multiple printers in an organization
  4. Unauthorized access to print queues or files
  5. Security risks tied to cloud-based scanning workflows

Smart Printing Technology Could Help


While the report highlights several challenges, HP also suggests that adopting smarter print management systems can improve security.

Among SMBs that have implemented smart printing technology, 88% reported improved security outcomes. Businesses cited three main benefits:

  • Better visibility into printing and scanning activity (90%)
  • Improved compliance with security standards (85%)
  • Stronger enforcement of printing rules and restrictions (83%)


Prof Lynne Taylor and Dr Sarah O’Keeffe awarded 2026 St Patrick’s Day Medal


The Research Ireland St Patrick’s Day Medal honours exceptional academic and industry leaders with strong Irish roots.

Taoiseach Micheál Martin, TD has presented Prof Lynne Taylor, a Retter distinguished professor of pharmacy at Purdue University, and Dr Sarah O’Keeffe, the group vice-president for product research and development at Eli Lilly, with the Research Ireland St Patrick’s Day medal.

The medal is awarded each year to academic and industry leaders with established Irish roots who, from their positions in the US, support and champion Ireland’s research community. Previous winners include computer scientist Dr Eamonn Keogh, Stripe founders John and Patrick Collison and Dr Ann Kelleher.

A global authority on drug formulation science, Taylor conducts research that provides the foundational technologies supporting the delivery of life-saving treatments for diseases such as cancer and hepatitis C. An Irish citizen, she is a vocal advocate for Ireland’s pharma sector through her advisory roles with the Research Ireland Centre for Pharmaceuticals and collaboration with universities.


She is also the editor-in-chief of Molecular Pharmaceutics and is committed to supporting other women in STEM through the mentorship of emerging scientists, building a formidable talent pipeline, with many former group members now holding prominent positions globally.

Commenting on the award, Taylor said: “It is a great honour to receive this award from Ireland’s research and innovation agency. For many years I have been involved with championing Irish research and supporting scientists at every stage of their development, across Ireland and globally. 

“Whether serving as a mentor, adviser, collaborator or guest speaker, these interactions with Irish scientists have been deeply rewarding. It is a privilege to continue playing a role in fostering greater connectivity and knowledge exchange between the United States and Ireland, and I am confident that the long-standing bonds between our two countries will grow even stronger into the future.”

O’Keeffe is considered one of Ireland’s most senior leaders in global pharmaceutical R&D, and she oversees more than 1,000 scientists and engineers who translate discovery molecules into medicines for patients worldwide. She has been central to a number of major advances in drug development, including the development of the investigational drug candidate orforglipron, which was recognised by Time magazine for its potential global health impact in the management of diabetes and obesity.


Beginning her career with Eli Lilly in Indianapolis, O’Keeffe played a central role in advancing manufacturing capabilities at the company’s Kinsale site, earning the facility the ISPE Global Facility of the Year Award for Innovation in 2017. She is also a central figure in the development of the $4.5bn Lilly Medicine Foundry.

Of her win, she said: “I am delighted and proud to receive this recognition from Research Ireland. I would like, firstly, to acknowledge UCC for being the launchpad for my career in industry. I’d also like to thank all my Lilly colleagues in Ireland, United States and internationally over the last two decades, for their extraordinary commitment and relentless pursuit of excellence. 

“Pharmaceutical research endeavours are a team pursuit, and collective passion and perseverance through times of challenge and often, failure is how progress and success happens. It has been a pleasure to have shared my journey to date with such talented colleagues who have the patient front and centre in all that they do.”

Presenting both recipients with their medals in Washington DC, Martin stated: “Today, we honour two outstanding scientific leaders whose achievements exemplify the very best of our global research community. Prof Taylor and Dr O’Keeffe demonstrate how members of the Irish diaspora, working at the highest levels in the United States, are helping to shape the future of medicine and strengthen international partnerships. 


“Their respective work has enhanced Ireland’s reputation as a leader in research and innovation, and reflects both the deep and enduring ties between Ireland and the US, and our shared commitment to scientific excellence. I am delighted to recognise their leadership and achievements here today, and to celebrate the impact they continue to make on behalf of Ireland.”

Updated, 3.35pm, 18 March 2026: This article was amended to clarify that O’Keeffe helped Eli Lilly’s Kinsale site earn the ISPE Global Facility of the Year Award for Innovation.

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.


CISA orders feds to patch Zimbra XSS flaw exploited in attacks



CISA has ordered U.S. government agencies to secure their servers against an actively exploited vulnerability in the Zimbra Collaboration Suite (ZCS).

Zimbra is a very popular email and collaboration software suite used by hundreds of millions of people worldwide, across thousands of businesses and hundreds of government agencies.

Tracked as CVE-2025-66376 and patched in early November, this high-severity security flaw stems from a stored cross-site scripting (XSS) weakness in the Classic UI that remote unauthenticated attackers could exploit by abusing Cascading Style Sheets (CSS) @import directives in email HTML.

While Synacor (the company behind Zimbra) didn’t share any details on the impact of a successful CVE-2025-66376 attack, it can likely be exploited to execute arbitrary JavaScript via malicious HTML-based emails, potentially allowing attackers to hijack user sessions and steal sensitive data within the compromised Zimbra environment.
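Synacor has not published the fix in detail, but the general class of mitigation is well understood: sanitize inbound HTML so embedded styles cannot pull in external, attacker-controlled resources. A minimal, hypothetical sketch (the regex and function names below are illustrative assumptions; a production mail server should use a real HTML/CSS sanitizer, since regular expressions are easy to bypass):

```python
import re

# Matches CSS @import rules in their common forms:
#   @import url("…");   @import url(…);   @import "…";   @import '…';
IMPORT_RULE = re.compile(
    r"@import\s+(?:url\([^)]*\)|\"[^\"]*\"|'[^']*')[^;]*;?",
    re.IGNORECASE,
)

def strip_css_imports(html: str) -> str:
    """Drop @import directives from an HTML email body so that rendered
    styles cannot fetch attacker-controlled external stylesheets."""
    return IMPORT_RULE.sub("", html)

mail = '<style>@import url("https://evil.example/x.css"); p { color: red }</style>'
clean = strip_css_imports(mail)
print(clean)  # the @import rule is gone; the local style rule survives
```

Stripping `@import` closes only this one vector; the broader stored-XSS class also calls for defenses such as Content Security Policy headers on the webmail UI.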


CISA added it to its catalog of vulnerabilities exploited in the wild on Wednesday and gave Federal Civilian Executive Branch (FCEB) agencies two weeks to secure their servers by April 1st, as mandated by the Binding Operational Directive (BOD) 22-01 issued in November 2021.

Although BOD 22-01 applies only to federal agencies, the U.S. cybersecurity agency encouraged all organizations, including those in the private sector, to patch this actively exploited flaw as soon as possible.

“Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable,” CISA warned. “These types of vulnerabilities are frequent attack vectors for malicious cyber actors and pose significant risks to the federal enterprise.”

Zimbra servers under attack

Zimbra security flaws are frequently targeted in attacks and have been exploited to breach thousands of vulnerable email servers worldwide in recent years.


For instance, as early as June 2022, Zimbra auth-bypass and remote code execution bugs were abused to breach more than 1,000 servers.

Starting in September 2022, hackers exploited a zero-day vulnerability in Zimbra Collaboration Suite, breaching nearly 900 servers within two months after gaining remote code execution on compromised instances.

The Russian state-backed Winter Vivern hacking group also used reflected XSS exploits to breach the Zimbra webmail portals of NATO-aligned governments and the mailboxes of government officials, military personnel, and diplomats.

More recently, threat actors exploited another Zimbra XSS vulnerability (CVE-2025-27915) in zero-day attacks to execute arbitrary JavaScript code, enabling them to set email filters that redirect messages to attacker-controlled servers.


Inventec’s bizarre VeilBook laptop hides its touchpad under a sliding keyboard just to give cooling fans a little breathing room



  • Inventec VeilBook rearranges keyboard and touchpad to prioritize airflow inside thin laptop
  • Sliding keyboard design exposes ventilation openings normally hidden beneath traditional notebook layouts
  • VeilBook’s cooling strategy sacrifices touchpad access during heavier computing workloads

Taiwanese manufacturer Inventec has revealed an experimental laptop called the VeilBook, a concept device built around an unusual keyboard placement and thermal design.

The machine features a 14-inch display and an ultra-thin chassis measuring less than 10mm thick, placing it among the slimmer notebook concepts proposed in recent years.


The Jehovah’s Witnesses Are Back Abusing Copyright Law To Unmask Their Critics. Again.


from the dmca-surveillance dept

EFF announced last week that it has stepped in to defend yet another anonymous Jehovah’s Witness critic from having their identity exposed through bogus copyright claims. The Watch Tower Bible and Tract Society — the organizational arm of the Jehovah’s Witnesses — has sent DMCA subpoenas to both Google and Cloudflare seeking information to unmask the anonymous operator of a website called JWS Library. If you’re getting a sense of déjà vu here, that’s because we’ve written about Watch Tower doing this exact thing more than once before, and they keep coming back to the same playbook.

EFF’s client, identified as J. Doe, is a current member of the Jehovah’s Witnesses who got curious about the history of the organization’s public statements and how they’ve changed over time. So Doe did something pretty straightforwardly useful. As EFF explains:

They created research tools to analyze those documents and ultimately created a website, JWS Library, allowing others to use those tools and verify their findings through an archive that included documents suppressed by the church. Doe and others discovered prophecies that failed to come true, erasure of a leader’s disgrace, increased calls for obedience and donations, and other insights about the Jehovah’s Witnesses’ practices. Doe also used machine translation on a foreign-language document to help the community understand what the church was saying to different audiences and also to help understand potential changes in the organization’s attitudes towards dissent.

That’s about as clearly transformative and non-commercial as fair use gets — it’s for research and commentary, after all. But Watch Tower doesn’t care about whether the copyright claim is actually viable. It cares about finding out who Doe is. And everyone involved knows exactly why. Again EFF’s Kit Walsh explains:

Within the church, dissent or even asking questions has often been punished by labeling members as apostates and ostracizing—or “disfellowshipping”— them. As a result, Doe and others choose to speak anonymously to avoid retaliation that could cost them family, friend, and professional relationships.

Watch Tower knows all of this, of course. That’s precisely the point. They’re not sending DMCA subpoenas to Google and Cloudflare because they have a genuine interest in protecting their copyrights — they’re using the subpoena process as a surveillance tool with a built-in punishment mechanism waiting at the other end.


We know this because we’ve watched the pattern play out in extraordinary detail multiple times. When Paul Levy of Public Citizen’s Litigation Group dug into Watch Tower’s history back in 2022, he found that the organization had filed an astounding 72 copyright subpoenas since 2017. And how many of those subpoenas resulted in an actual copyright infringement lawsuit? Essentially zero. As Levy documented:

As can be seen from this list of Watch Tower copyright infringement lawsuits, Watch Tower has never used the information obtained from these subpoenas to file an infringement action. The only infringement lawsuit that Watch Tower has filed against the target of one of its DMCA subpoenas is a current case (discussed below) in which enforcement of the subpoena was denied!

So they file subpoena after subpoena claiming they need to identify alleged infringers to bring a lawsuit, and then they never bring the lawsuit. What they do with the information, as Levy uncovered, is identify critics and then initiate disfellowship proceedings against them. The copyright claim is just the crowbar they use to pry open the door.

The one time Watch Tower actually did file a lawsuit — against a critic using the pseudonym Kevin McFree — things went badly for them. Once a judge started paying close attention to what was actually going on, Watch Tower fled the case, dismissing with prejudice. Among the more remarkable moments in that case: Watch Tower’s counsel tried to claim the organization lacked “significant funds” to pursue litigation — despite Watch Tower’s publicly available tax filings showing it has more than a billion dollars in assets. The organization also tried to use the infringement lawsuit as a vehicle to investigate how McFree had obtained leaked unpublished videos — something that had nothing to do with copyright and everything to do with plugging leaks and identifying internal dissidents.

Which makes the history here so galling. The Jehovah’s Witnesses have one of the most impressive First Amendment track records of any organization in American legal history. Starting with Lovell v. City of Griffin in 1938, they brought a string of landmark cases establishing core free speech protections that benefit all of us today. They fought for the right to go door-to-door without identifying themselves, and against compelled speech. Watch Tower’s own in-house counsel, Paul Polidoro — the same lawyer who has been issuing many of these DMCA subpoenas — successfully argued before the Supreme Court for the right of Jehovah’s Witnesses to speak anonymously.


And now that same organization is systematically using copyright law’s cheapest, lowest-bar procedural tool to strip anonymity from its own members who dare to ask questions. As EFF puts it:

The First Amendment does not permit the unmasking of anonymous speakers based on such weak claims. Indeed, the First Amendment protects anonymous speakers precisely because some would be deterred from speaking if they faced retribution for doing so.

Watch Tower got caught doing this in 2019. They got caught again in 2022 and ran away from court once a judge saw through the scheme. And here they are in 2026, right back at it. There’s no honest way to treat these as isolated incidents — this is a deliberate, ongoing policy of abusing copyright as a weapon against internal dissent. The DMCA subpoena process — designed to be quick and cheap — is working exactly as Watch Tower wants: a low-cost intelligence-gathering operation that most targets can’t afford to fight.

EFF is pushing back, at least. But it shouldn’t require EFF — or, as in the last case, Paul Levy and Public Citizen Litigation Group — to show up every single time before a court will acknowledge that an organization with a billion dollars in assets and a decade-long pattern of filing subpoenas it never converts into actual lawsuits is abusing the process. At some point, courts should be able to connect these dots on their own.

Filed Under: copyfraud, copyright, dmca, fair use, jehovah’s witnesses

Companies: eff, watch tower bible and tract society

Commission says EU Inc will be in place by end of 2026

Many activists and lobbyists had called for a European company register as part of EU Inc. Today’s EU legislative proposal has indeed included one.

Today saw the official launch of the EU Inc or ‘28th Regime’ legislative proposal by European Commission president Ursula von der Leyen in Brussels, after it got its first outing at Davos in January. It includes the much-requested European company register, despite earlier indications that this would be unwieldy and would not form part of the proposal.

“It can still take weeks or even months to set up a company or to start doing business in another country within the single market,” von der Leyen said this morning in Brussels.

“Barriers inside Europe hurt us more than tariffs from the outside. Across our union, entrepreneurs who want to scale up are the first victims of regulatory fragmentation. Instead of one market, they face 27 legal systems and more than 60 national company forms. And the consequences are real.”

“The time and money spent filling paperwork is not spent on creating or innovating,” she said. “Obviously, this must change and fast. And so here comes EU Inc, the 28th regime.”

The EU Inc movement had gathered steam since its launch back in 2024, and the announcement from von der Leyen at the World Economic Forum in Davos was widely celebrated as progress. The initiative launched today includes many of the elements for which the start-up community lobbied hard.

What’s included?

The 48-hour incorporation benchmark – the Holy Grail for many in the European start-up sector – is included, as anticipated given its place in von der Leyen’s Davos speech. Less expected was the confirmation that the proposal includes the EU Business registry for EU Inc companies.

“EU Inc creates a single European company framework,” said von der Leyen. “It is one simple set of rules that works across our entire single market of 450m consumers. It will make it drastically easier to start and to grow a business in Europe. Any entrepreneur will be able to create a company within 48 hours from anywhere in the European Union, fully digitalised for less than €100 and without minimum share capital.

“At the heart of this proposal is one simple principle that says, ‘once only’. Companies will provide their information to public authority, the data one time only, and that information will then be shared automatically between relevant administrations, from business registers to taxes to social security … and this information will be stored and easily accessible in a new EU Business register for EU Inc companies.”

A third element of EU Inc will be around talent, she said.

“Now, with EU Inc, employee stock options will be simpler to offer and easier to manage across borders, so it will help you in companies to compete for the best people, and founders will be able to protect companies and employees from unwanted takeovers,” said von der Leyen.

Finally, she addressed the much-discussed ‘risk factor’. Many in the community had pointed to the lack of a risk culture in Europe, where failure was not recognised as a necessary part of any true start-up ecosystem.

“In business, failure should not be the end of the road,” said von der Leyen. “It should be part of the journey. With EU Inc, we want to reward entrepreneurship and make it less risky, and this is why we will fully digitalise insolvency procedures and introduce a fast-track insolvency process for start-ups so that entrepreneurs can start again more easily.”

She also addressed the concerns of labour activists and trade unions around EU Inc.

“Let me be very clear on one important point. The EU Inc proposal will in every way respect existing social standards and labour law, and this includes all employees’ rights to participate in companies’ boards. This proposal includes strong safeguards to ensure that such rules are applied.”

Boosting EU start-ups and scale-ups

EU-INC, a movement with more than 22,000 signatories including the founders of Stripe and venture capital players from Sequoia to Index, had been running a policy campaign since October 2024 pushing for the creation of the so-called 28th regime, and in 2025 presented legal proposals to the Commission.

DC Cahalane is a venture partner at Sure Valley Ventures. In a SiliconRepublic.com op-ed in September last year, he described EU Inc as “Europe’s greatest opportunity to build a unified tech ecosystem that can compete globally”.

Simon Paris is CEO of Unit4, a Utrecht-headquartered enterprise software company. He told SiliconRepublic.com he is very positive about the potential for Europe to create European software champions, and that he sees EU Inc as a positive step in the right direction.

“Some are saying we are better off focusing efforts elsewhere, as we’re too far behind the US and China,” he said. “I disagree. I would remind critics of Europe’s decision to build Airbus in response to the need for an alternative to Boeing. A collective decision was made to define this as a strategic priority for the region, despite all the risks it entailed. As the Airbus example shows, we have been here before, and we made it happen.”

Capital challenge

Availability of capital remains a major challenge for European scale-ups in comparison to their US and Chinese counterparts, and von der Leyen did address this briefly, saying there are plans afoot to tackle the issue.

“This is only the beginning. We will make it easier for venture capital to flow to businesses,” she said. “This will be done by the savings and investment union. We will explore new possibilities for cross-border telework, for start-ups and scale-ups. And today, we also adopted a recommendation to harmonise the definition of innovative start-ups and scale-ups across Europe so that we can design better policies to help our businesses to grow and to thrive in Europe.”

At a later press conference, Henna Virkkunen, executive vice-president of the European Commission, said the intention was to have the EU Inc regime in place by the end of 2026.

Meta Is Shutting Down VR Social Platform Horizon Worlds

Meta is shutting down its VR social platform Horizon Worlds, which was once a key piece of the pivot to the metaverse. The company said the app will be taken off the Quest store at the end of March, and fully removed from Quest headsets by June 15. After that date, it will shift to a standalone “mobile-only experience.” CNBC reports: The shift for Horizon Worlds, which was once a central part of the company’s push into virtual reality, comes weeks after Meta cut over 1,000 employees from Reality Labs, the unit responsible for the metaverse. […] The social platform has never drawn more than a couple hundred thousand active users a month, CNBC previously reported.

The virtual 3D social network where avatars could interact and play games with other users officially launched in late 2021. It operated exclusively on the Quest VR platform until Meta launched a mobile app version in September 2023. The mobile version of Horizon Worlds was built to provide an entry point for users without VR headsets, functioning similarly to Roblox.

Mistral AI launches Forge to help companies build proprietary AI models, challenging cloud giants

Mistral AI on Monday launched Forge, an enterprise model training platform that allows organizations to build, customize, and continuously improve AI models using their own proprietary data — a move that positions the French AI lab squarely against the hyperscale cloud providers in one of the most consequential and least understood markets in enterprise technology.

The announcement caps a remarkably aggressive week for Mistral, which also released its Mistral Small 4 model, unveiled Leanstral — an open-source code agent for formal verification — and joined the newly formed Nvidia Nemotron Coalition as a co-developer of the coalition’s first open frontier base model. Together, these moves paint the picture of a company that is no longer content to compete on model benchmarks alone and is instead racing to become the infrastructure backbone for organizations that want to own their AI rather than rent it.

Forge goes significantly beyond the fine-tuning APIs that Mistral and its competitors have offered for the past year. The platform supports the full model training lifecycle: pre-training on large internal datasets, post-training through supervised fine-tuning, DPO, and ODPO, and — critically — reinforcement learning pipelines designed to align models with internal policies, evaluation criteria, and operational objectives over time.
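The post-training techniques named above are concrete optimization objectives, not marketing terms. As a rough illustration of the preference-optimization step (this is not Forge's API, just the standard DPO objective from the literature, evaluated here on made-up log-probabilities):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Each argument is the summed log-probability a model assigns to the
    chosen / rejected response; beta controls how far the policy may
    drift from the frozen reference model.
    """
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(x)) written stably as log(1 + e^-x)
    return math.log1p(math.exp(-logits))

# A policy that favors the chosen answer more than the reference does
# gets a low loss; a policy that drifts the other way gets a high one.
low = dpo_loss(-8.0, -12.0, -10.0, -10.0)   # policy improved on the pair
high = dpo_loss(-12.0, -8.0, -10.0, -10.0)  # policy got worse
```

In practice these log-probabilities come from forward passes of the policy and reference models over each response; the sketch only shows the shape of the objective.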

“Forge is Mistral’s model training platform,” said Elisa Salamanca, head of product at Mistral AI, in an exclusive interview with VentureBeat ahead of the launch. “We’ve been building this out behind the scenes with our AI scientists. What Forge actually brings to the table is that it lets enterprises and governments customize AI models for their specific needs.”

Why Mistral says fine-tuning APIs are no longer enough for serious enterprise AI

The distinction Mistral is drawing — between lightweight fine-tuning and full-cycle model training — is central to understanding why Forge exists and whom it serves.

For the past two years, most enterprise AI adoption has followed a familiar pattern: companies select a general-purpose model from OpenAI, Anthropic, Google, or an open-source provider, then apply fine-tuning through a cloud API to adjust the model’s behavior for a narrow set of tasks. This approach works well for proof-of-concept deployments and many production use cases. But Salamanca argues that it fundamentally plateaus when organizations try to solve their hardest problems.

“We had a fine-tuning API relying on supervised fine-tuning. I think it was kind of what was the standard a couple of months ago,” Salamanca told VentureBeat. “It gets you to a proof-of-concept state. Whenever you actually want to have the performance that you’re targeting, you need to go beyond. AI scientists today are not using fine-tuning APIs. They’re using much more advanced tools, and that’s what Forge is bringing to the table.”

What Forge packages, in Salamanca’s telling, is the training methodology that Mistral’s own AI scientists use internally to build the company’s flagship models — including data mixing strategies, data generation pipelines, distributed computing optimizations, and battle-tested training recipes. She drew a sharp line between Forge and the open-source tools and community tutorials that are freely available today.

“There’s no platform out there that provides you real-world training recipes that work,” Salamanca said. “Other open-source repositories or other tools can give you generic configurations or community tutorials, but they don’t give you the recipe that’s been validated — that we’ve been doing for all of our flagship models today.”

From ancient manuscripts to hedge fund quant languages, early customers reveal what off-the-shelf AI can’t do

The obvious question facing any product like Forge is demand. In a market where GPT-5, Claude, Gemini, and a growing fleet of open-source models can handle an enormous range of tasks, why would an enterprise invest the time, compute, and expertise required to train its own model from scratch?

Salamanca acknowledged the question head-on but argued that the need emerges quickly once companies move beyond generic use cases. “A lot of the existing models can get you very far,” she said. “But when you’re looking at what’s going to make you competitive compared to your competition — everyone can adopt and use the models that are out there. When you want to go a step beyond that, you actually need to create your own models. You need to leverage your proprietary information.”

The real-world examples she cited illustrate the edges of the current model ecosystem. In one case, Mistral worked with a public institution that had ancient manuscripts with missing text from damaged sections. “The models that were available were not able to do this because they’ve never seen the data,” Salamanca explained. “Digitization was not very good. There were some unique patterns and characters, and so we actually created a model for them to fill in the spans. This is now used by their researchers, and it’s accelerating their publication and understanding of these documents.”

In another engagement, Mistral partnered with Ericsson to customize its Codestral model for legacy-to-modern code translation. Ericsson, Salamanca said, has built up half a decade of proprietary knowledge around an internal calling language — a codebase so specialized that no off-the-shelf model has ever encountered it. “The concrete impact is like turning a year-long manual migration process, where each engineer needs six months of onboarding, to something that’s really more scalable and faster,” she said.

Perhaps the most telling example involves hedge funds. Salamanca described working with financial firms to customize models for proprietary quantitative languages — the kind of deeply guarded intellectual property that these firms keep on-premises and never expose to cloud-hosted AI services. Using Forge’s reinforcement learning capabilities, Mistral helped one hedge fund develop custom benchmarks and then trained the model to outperform on them, producing what Salamanca called “a unique model that was able to give them the competitive edge that was needed.”

How Forge makes money: license fees, data pipelines, and embedded AI scientists

Forge’s business model reflects the complexity of enterprise model training. According to Salamanca, it operates across several revenue streams. For customers who run training jobs on their own GPU clusters — a common requirement in highly regulated or IP-sensitive industries — Mistral does not charge for compute. Instead, the company charges a license fee for the Forge platform itself, along with optional fees for data pipeline services and what Mistral calls “forward-deployed scientists” — embedded AI researchers who work alongside the customer’s team.

“No competitor out there today is kind of selling this embedded scientist as part of their training platform offering,” Salamanca said.

This model has clear echoes of Palantir’s early playbook, where forward-deployed engineers served as the critical bridge between powerful software and the messy reality of enterprise data. It also suggests that Mistral recognizes a fundamental truth about the current state of enterprise AI: the technology alone is not enough. Most organizations lack the internal expertise to design effective training recipes, curate data at scale, or navigate the treacherous optimization landscape of distributed GPU training.

The infrastructure itself is flexible. Training can happen on Mistral’s own clusters, on Mistral Compute (the company’s dedicated infrastructure offering), or entirely on-premises within the customer’s own data centers. “We have all these different cases, and we support everything,” Salamanca said.

Keeping proprietary data off the cloud is Forge’s sharpest selling point

One of the sharpest points of differentiation Mistral is pressing with Forge is data privacy. When customers train on their own infrastructure, Salamanca emphasized that Mistral never sees the data at all.

“It’s on their clusters, it’s with their data — we don’t see anything of it, and so it’s completely under their control,” she said. “I think this is something that sets us apart from the competition, where you actually need to upload your data, and you have a black box effect.”

This matters enormously in sectors like defense, intelligence, financial services, and healthcare, where the legal and reputational risks of exposing proprietary data to a third-party cloud service can be deal-breakers. Mistral has already partnered with organizations including ASML, DSO National Laboratories Singapore, the European Space Agency, Home Team Science and Technology Agency Singapore, and Reply — a roster that suggests the company is deliberately targeting the most data-sensitive corners of the enterprise market.

Forge also includes data pipeline capabilities that Mistral has developed through its own model training: data acquisition, curation, and synthetic data generation. “Data is a critical piece of any training job today,” Salamanca said. “You need to have good data. You need to have a good amount of data to make sure that the model is going to be good performing. We’ve acquired, as a company, really great knowledge building out these data pipelines.”

In the age of AI agents, Mistral argues that custom models still matter more than MCP servers

The timing of Forge’s launch raises an important strategic question. The AI industry in 2026 has been consumed by agents — autonomous AI systems that can use tools, navigate multi-step workflows, and take actions on behalf of users. If the future belongs to agents, why does the underlying model matter? Can’t companies simply plug into the best available frontier model through an MCP server or API and focus their energy on orchestration?

Salamanca pushed back on this framing with conviction. “The customers that we’ve been working on — some of these specific problems are things that no MCP server would ever solve,” she said. “You actually need that intelligence. You actually need to create that model that will help you solve your most critical business problem.”

She also argued that model customization is essential even in purely agentic architectures. “There are some agentic behaviors that you need to bring to the model,” Salamanca said. “It can be about reasoning patterns, specific types of documentation, making sure that you have the right reasoning traces. Even in these cases where people are going completely agentic, you still need model customization — like reinforcement learning techniques — to actually get the right level of performance.”

Mistral’s press release makes this connection explicit, arguing that custom models make enterprise agents more reliable by providing deeper understanding of internal environments: more precise tool selection, more dependable multi-step workflows, and decisions that reflect internal policies rather than generic assumptions.

The platform also supports an “agent-first” design philosophy. Forge exposes interfaces that allow autonomous agents — including Mistral’s own Vibe coding agent — to launch training experiments, find optimal hyperparameters, schedule jobs, and generate synthetic data. “We’ve actually been building Forge in an AI-native way,” Salamanca said. “We’re already testing out how autonomous agents can actually launch training experiments.”

Mistral Small 4, Leanstral, and the Nvidia coalition: the week that redefined the company’s ambitions

To fully appreciate Forge’s significance, it helps to view it alongside the other announcements Mistral made in the same week — a barrage of releases that together represent the most ambitious expansion in the company’s short history.

Just yesterday, Mistral released Leanstral, the first open-source code agent for Lean 4, the proof assistant used in formal mathematics and software verification. Leanstral operates with just 6 billion active parameters and is designed for realistic formal repositories — not isolated math competition problems. On the same day, Mistral launched Mistral Small 4, a mixture-of-experts model with 119 billion total parameters but only 6 billion active per query, running 40 percent faster than its predecessor while handling three times more queries per second. Both models ship under the Apache 2.0 license — the most permissive open-source license in wide use.
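The "many total parameters, few active per query" property of a mixture-of-experts model comes from a router that selects only a handful of experts per token, so compute scales with the number selected rather than the total. A toy sketch in plain Python (hypothetical expert functions and router scores, no relation to Mistral's actual implementation):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token, router_scores, experts, k=2):
    """Route one token through the top-k of N experts.

    Only the k selected experts actually run, which is why a model can
    hold many total parameters while activating only a few per query.
    """
    probs = softmax(router_scores)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    # Renormalize gate weights over the chosen experts only.
    z = sum(probs[i] for i in top)
    return sum((probs[i] / z) * experts[i](token) for i in top), top

# Toy setup: 8 "experts", each just scales its input differently.
experts = [(lambda x, s=s: s * x) for s in range(1, 9)]
scores = [0.1, 2.0, 0.2, 1.5, 0.0, 0.3, 0.1, 0.2]  # router logits
out, used = moe_forward(10.0, scores, experts, k=2)  # runs experts 1 and 3 only
```

A real MoE layer replaces the scalar experts with feed-forward networks and learns the router jointly with them, but the routing arithmetic is the same.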

And then there is the Nvidia Nemotron Coalition. Announced at Nvidia’s GTC conference, the coalition is a first-of-its-kind collaboration between Nvidia and a group of AI labs — including Mistral, Perplexity, LangChain, Cursor, Black Forest Labs, Reflection AI, Sarvam, and Thinking Machines Lab — to co-develop open frontier models. The coalition’s first project is a base model co-developed specifically by Mistral AI and Nvidia, trained on Nvidia DGX Cloud, which will underpin the upcoming Nvidia Nemotron 4 family of open models.

“Open frontier models are how AI becomes a true platform,” said Arthur Mensch, cofounder and CEO of Mistral AI, in Nvidia’s announcement. “Together with Nvidia, we will take a leading role in training and advancing frontier models at scale.”

This coalition role is strategically significant. It positions Mistral not merely as a consumer of Nvidia’s compute infrastructure but as a co-creator of the foundational models that the broader ecosystem will build upon. For a company that is still a fraction of the size of its American competitors, this is an outsized seat at the table.

Forge takes aim at Amazon, Microsoft, and Google — and says they can’t go deep enough

Forge enters a market that is already crowded — at least on the surface. Amazon Bedrock, Microsoft Azure AI Foundry, and Google Cloud Vertex AI all offer model training and customization capabilities. But Salamanca argued that these offerings are fundamentally limited in two respects.

First, they are cloud-only. “In one set of cases, it’s very easy to answer — they want to run this on their premises, and so all these tools that are available on the cloud are just not available for them,” Salamanca said. Second, she argued that the hyperscalers’ training tools largely offer simplified API interfaces that don’t provide the depth of control that serious model training requires.

There is also the dependency question. Salamanca described digital-native companies that had built products on top of closed-source models, only to have a new model release — more verbose than its predecessor — crash their production pipelines. “When you’re relying on closed-source models, you are also super dependent on the updates of the model that have side effects,” she warned.

This argument resonates with the broader sovereignty narrative that has powered Mistral’s rise in Europe and beyond. The company has positioned itself as the alternative for organizations that want to own their AI stack rather than lease it from American hyperscalers. Forge extends that argument from inference to training: not just running models you own, but building them in the first place.

The open-source foundation matters here, too. Mistral has been releasing models under permissive licenses since its founding, and Salamanca emphasized that the company is building Forge as an open platform. While it currently works with Mistral’s own models, she confirmed that support for other open-source architectures is planned. “We’re deeply rooted into open source. This has been part of our DNA since the beginning, and we have been building Forge to be an open platform — it’s just a question of a matter of time that we’ll be opening this to other open-source models.”

A co-founder’s departure to xAI underscores why Mistral is turning expertise into a product

The timing of Forge’s launch also arrives against a backdrop of fierce talent competition. As FinTech Weekly reported on March 14, Devendra Singh Chaplot — a co-founder of Mistral AI who headed the company’s multimodal group and contributed to training Mistral 7B, Mixtral 8x7B, and Mistral Large — left to join Elon Musk’s xAI, where he will work on Grok model training. Chaplot had previously also been a founding member of Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati.

The loss of a co-founder is never insignificant, but Mistral appears to be compensating with institutional capability rather than individual brilliance. Forge is, in essence, a productization of the company’s collective training expertise — the recipes, the pipelines, the distributed computing optimizations — in a form that can scale beyond any single researcher. By packaging this knowledge into a platform and pairing it with forward-deployed scientists, Mistral is attempting to build a durable competitive asset that doesn’t walk out the door when a key hire departs.

Mistral’s big bet: the companies that own their AI models will be the ones that win

Forge is a bet on a specific theory of the enterprise AI future: that the most valuable AI systems will be those trained on proprietary knowledge, governed by internal policies, and operated under the organization’s direct control. This stands in contrast to the prevailing paradigm of the past two years, in which enterprises have largely consumed AI as a cloud service — powerful but generic, convenient but uncontrolled.

The question is whether enough enterprises will be willing to make the investment. Model training is expensive, technically demanding, and requires sustained organizational commitment. Forge lowers the barriers — through its infrastructure automation, its battle-tested recipes, and its embedded scientists — but it does not eliminate them.

What Mistral appears to be banking on is that the organizations with the most to gain from AI — the ones sitting on decades of proprietary knowledge in highly specialized domains — are precisely the ones for whom generic models are least sufficient. These are the companies where the gap between what a general-purpose model can do and what the business actually needs is widest, and where the competitive advantage of closing that gap is greatest.

Forge supports both dense and mixture-of-experts architectures, accommodating different trade-offs between performance, cost, and operational constraints. It handles multimodal inputs. It is designed for continuous adaptation rather than one-time training, with built-in evaluation frameworks that let enterprises test models against internal benchmarks before production deployment.
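An internal-benchmark gate of the kind described can be pictured as a simple threshold check before promotion. The sketch below is a hypothetical harness with invented benchmark names and scores, not Forge's interface:

```python
def passes_gate(results, thresholds):
    """Decide whether a candidate model may be promoted to production.

    results: benchmark name -> candidate's score
    thresholds: benchmark name -> minimum acceptable score
    Returns (ok, failures), where failures maps each failing benchmark
    to its (score, threshold) pair.
    """
    failures = {name: (score, thresholds[name])
                for name, score in results.items()
                if score < thresholds[name]}
    return (not failures), failures

# One benchmark passes, one falls short, so the gate blocks promotion.
ok, fails = passes_gate(
    {"legacy_code_translate": 0.91, "policy_qa": 0.78},
    {"legacy_code_translate": 0.85, "policy_qa": 0.80},
)
```

The value of running such a gate continuously, rather than once at launch, is that each retraining cycle is checked against the same internal bar before it reaches users.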

For the past two years, the enterprise AI playbook has been straightforward: pick a model, call an API, ship a feature. Mistral is now asking a harder question — whether the organizations willing to do the difficult, expensive, unglamorous work of training their own models will end up with something the API-callers never get.

An unfair advantage.
