
Microsoft to shut down Exchange Online EWS in April 2027


Microsoft announced today that the Exchange Web Services (EWS) API for Exchange Online will be shut down in April 2027, after nearly 20 years.

EWS is a cross-platform API for building apps that access Exchange mailbox items, such as email messages, meetings, and contacts, from Exchange Online and on-premises editions of Exchange (starting with Exchange Server 2007).

Microsoft will begin blocking Exchange Online EWS by default on October 1, 2026, but administrators can temporarily maintain access via an application allowlist. The final shutdown occurs on April 1, 2027, with no exceptions granted.

Administrators who create allowlists and configure settings by the end of August 2026 will be excluded from the automatic October blocking. Starting in September 2026, Microsoft will pre-populate allowlists for organizations that have not created their own, based on each tenant’s usage patterns.
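For planning purposes, the phased policy above boils down to a date check against the two deadlines. The Python sketch below is a purely illustrative model of that logic; the app IDs and allowlist structure are hypothetical stand-ins, not Microsoft's actual tenant configuration interface.

```python
from datetime import date

# Dates from Microsoft's announced timeline.
BLOCK_BY_DEFAULT = date(2026, 10, 1)   # EWS blocked unless app is allowlisted
FINAL_SHUTDOWN = date(2027, 4, 1)      # no exceptions after this date

def ews_allowed(app_id: str, allowlist: set, today: date) -> bool:
    """Illustrative model of the phased disablement: before October 2026
    everything works; between October 2026 and April 2027 only allowlisted
    apps work; after the final shutdown, nothing does."""
    if today >= FINAL_SHUTDOWN:
        return False
    if today >= BLOCK_BY_DEFAULT:
        return app_id in allowlist
    return True

allow = {"legacy-crm-connector"}  # hypothetical app identifier
print(ews_allowed("legacy-crm-connector", allow, date(2026, 11, 1)))  # True
print(ews_allowed("unlisted-app", allow, date(2026, 11, 1)))          # False
print(ews_allowed("legacy-crm-connector", allow, date(2027, 4, 1)))   # False
```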

The company may also conduct “scream tests” that temporarily disable EWS to expose hidden dependencies before the final cutoff, and will keep IT admins informed via monthly Message Center notifications that provide tenant-specific reminders and usage summaries.

However, it’s important to note that this retirement affects only Microsoft 365 and Exchange Online environments; EWS will continue to function in on-premises Exchange Server installations.

EWS retirement timeline (Microsoft)

“Today we’re announcing we will use a phased, admin controllable disablement plan that starts in October 2026 and concludes with a complete shutdown of EWS in 2027,” the Exchange Team said on Thursday. “EWS was built nearly 20 years ago, and while it served the ecosystem well, it no longer aligns with today’s security, scale, or reliability requirements.”

Microsoft also advised developers using the EWS API to switch to the Microsoft Graph API before EWS is retired, since Graph has reached near-complete feature parity with EWS for most scenarios.
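To give a flavor of the migration, here is a minimal sketch of the Graph REST request that replaces an EWS FindItem-style operation: listing a mailbox's newest messages. The user address and bearer token are placeholders; in practice the token would come from an OAuth flow (for example via MSAL).

```python
# Microsoft Graph exposes mailbox data over plain REST instead of EWS SOAP.
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def list_messages_request(user_id: str, top: int = 10):
    """Build the URL and headers for GET /users/{id}/messages."""
    url = f"{GRAPH_BASE}/users/{user_id}/messages?$top={top}"
    headers = {"Authorization": "Bearer <access-token>"}  # placeholder token
    return url, headers

url, headers = list_messages_request("user@contoso.com", top=5)
print(url)
# https://graph.microsoft.com/v1.0/users/user@contoso.com/messages?$top=5
# To execute for real: requests.get(url, headers=headers) with a valid token.
```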

“EWS is not being retired on-prem. Hybrid scenarios vary depending on how apps access data. On-prem mailboxes may continue using EWS; cloud mailboxes must move to Graph. Autodiscover will help apps determine mailbox location automatically,” Microsoft added.

“But note that only Exchange SE will support Graph for calls to Exchange Online, so hybrid customers will have to use Exchange SE to host on-premises mailboxes.”

Today’s announcement follows a 2018 warning that EWS would stop receiving functionality updates, and a September 2023 announcement that Microsoft planned to begin retiring the EWS API in October 2026.

Three years after that warning, in October 2021, Microsoft deprecated the 25 least-used EWS APIs for Exchange Online, removing support for them in March 2022 for security reasons.


Blocking The Internet Archive Won’t Stop AI, But It Will Erase The Web’s Historical Record

from the willingly-burning-libraries dept

Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper. 

That’s effectively what’s begun happening online in the last few months. The Internet Archive—the world’s largest digital library—has preserved newspapers since it went online in the mid-1990s. The Archive’s mission is to preserve the web and make it accessible to the public. To that end, the organization operates the Wayback Machine, which now contains more than one trillion archived web pages and is used daily by journalists, researchers, and courts.

But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web’s traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit. 
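For context, the web's traditional opt-out mechanism is a robots.txt rule naming a crawler's user-agent; the Archive's crawler has historically identified itself as ia_archiver. The sketch below shows how such a rule would be evaluated (the domain and rules are illustrative, not the Times' actual configuration):

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that singles out the Archive's crawler while
# leaving every other user-agent free to crawl.
robots_txt = """\
User-agent: ia_archiver
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("ia_archiver", "https://example.com/2024/story.html"))   # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/2024/story.html"))  # True
```

The Times' blocking reportedly goes beyond this voluntary protocol, which is what makes it harder for the Archive to comply selectively.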

For nearly three decades, historians, journalists, and the public have relied on the Internet Archive to preserve news sites as they appeared online. Those archived pages are often the only reliable record of how stories were originally published. In many cases, articles get edited, changed, or removed—sometimes openly, sometimes not. The Internet Archive often becomes the only source for seeing those changes. When major publishers block the Archive’s crawlers, that historical record starts to disappear.

The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several, including the Times, are now suing AI companies over whether training models on copyrighted material violates the law. There’s a strong case that such training is fair use.

Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response. Organizations like the Internet Archive are not building commercial AI systems. They are preserving a record of our history. Turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn’t start, and didn’t ask for. 

If publishers shut the Archive out, they aren’t just limiting bots. They’re erasing the historical record. 

Archiving and Search Are Legal 

Making material searchable is a well-established fair use. Courts have long recognized it’s often impossible to build a searchable index without making copies of the underlying material. That’s why when Google copied entire books in order to make a searchable database, courts rightly recognized it as a clear fair use. The copying served a transformative purpose: enabling discovery, research, and new insights about creative works. 

The Internet Archive operates on the same principle. Just as physical libraries preserve newspapers for future readers, the Archive preserves the web’s historical record. Researchers and journalists rely on it every day. According to Archive staff, Wikipedia alone links to more than 2.6 million news articles preserved at the Archive, spanning 249 languages. And that’s only one example. Countless bloggers, researchers, and reporters depend on the Archive as a stable, authoritative record of what was published online.

The same legal principles that protect search engines must also protect archives and libraries. Even if courts place limits on AI training, the law protecting search and web archiving is already well established.

The Internet Archive has preserved the web’s historical record for nearly thirty years. If major publishers begin blocking that mission, future researchers may find that huge portions of that historical record have simply vanished. There are real disputes over AI training that must be resolved in courts. But sacrificing the public record to fight those battles would be a profound, and possibly irreversible, mistake. 

Republished from the EFF’s Deeplinks blog.

Filed Under: ai, archives, copyright, culture, fair use, history

Companies: internet archive, ny times, the guardian


AI amplifies whatever you feed it, including confusion

Most organizations are not failing at AI because of technology. They are failing because they do not know which data actually matters, and they are scaling that confusion faster than ever. At a time when investment continues to surge, the expectation is that more intelligence will naturally follow. Instead, many teams are finding themselves overwhelmed. The issue is the inability to distinguish between signal and noise in a way that leads to confident decisions. 

The broader landscape makes this tension hard to ignore. According to the State of Enterprise AI 2026, global spending is projected to reach $2.52 trillion, yet only 14% of CFOs report measurable returns. At the same time, 42% of companies abandoned most of their AI pilots in 2025. These point to a systemic disconnect between ambition and execution. As boards demand accountability and leaders look for proof of value, many organizations are confronting a difficult reality: they invested in capability without first ensuring clarity. 

The usual explanation is that the data is not clean enough. That is not wrong, but it misses something more fundamental. Clean data has limited value if it is not relevant, connected, or usable in the context of real decisions. Over time, organizations have accumulated dashboards, reports, and tracking systems that create the appearance of visibility while leaving critical questions unresolved. Teams often cannot explain why a metric moves, how it connects to outcomes, or what action should follow. That gap between information and understanding is where progress stalls.

Part of the problem is scale. The volume of data has expanded faster than the systems used to interpret it. Teams track what they can, often without a clear view of why it matters, and the result is an environment filled with metrics that compete for attention. Definitions vary across departments, events are recorded inconsistently, and reporting relies on manual interventions that introduce further distortion. In that environment, it becomes difficult to form a single, coherent narrative. People operate from fragments, and those fragments rarely align. 

This fragmentation becomes more consequential as AI is introduced into the workflow. Systems trained on inconsistent inputs do not resolve ambiguity; they extend it. According to a report, 61% of data leaders say better data quality is helping move AI initiatives into production, yet 50% still identify data quality and retrieval as major barriers. There is also a concerning dynamic emerging around trust. While 65% of leaders believe employees trust the data used for AI, 75% acknowledge gaps in data literacy. That combination creates a situation where decisions are made with confidence but not necessarily with understanding.

There is a belief in some circles that better tools will eventually close this gap. We have seen the opposite. Organizations struggle because their operational systems were never designed to produce reliable signals. When processes are inconsistent, ownership is unclear, and metrics are loosely defined, the data generated from those systems reflects that ambiguity. Signals, which are meant to guide decisions and automation, end up reflecting fragmented realities instead of coherent ones. The outcome is hesitation and misalignment.

The effects show up in subtle but persistent ways. Teams spend more time reconciling numbers than acting on them. Leaders request additional reporting to compensate for uncertainty, which adds more layers without resolving the underlying issue. Priorities shift based on partial views of performance, and coordination across functions becomes more difficult. Over time, this erodes confidence, not just in the data, but in the systems that produce it. The organization moves, but without a shared understanding of direction.

A useful way to think about this is through navigation. Having more instruments in a cockpit does not guarantee a better flight if those instruments are not calibrated to the same reality. Pilots rely on a small number of trusted signals that are consistently defined and clearly understood. In many organizations, the opposite is true. There is an abundance of instrumentation, but little agreement on which signals matter or how they should be interpreted. The result is constant adjustment without meaningful progress. 

The urgency of this issue is reflected in broader research. A report shows that improving data governance has become a top priority for over 40% of leaders, even surpassing some AI-specific initiatives. The reasoning is straightforward: AI and automation amplify the condition of the data they rely on. When that condition is poor, the impact grows quickly, affecting both operational performance and strategic outcomes. This is a question of how organizations define, manage, and use information in practice.

Addressing this requires a shift in focus. The goal is not to build more sophisticated dashboards. It is to establish clarity around what decisions need to be made and what information is required to support them. That begins with defining ownership so that data is tied to accountability. It involves standardizing processes so that events are captured consistently across teams. It requires designing metrics that reflect how work actually happens, not just how it is reported. And it depends on building a data layer that brings these elements together into a coherent, usable view.

Even more important is the human dimension: understanding how people actually work day-to-day. Without that capability, even well-structured data will fall short of its potential. People need to understand not just how to access information, but how to apply it in the context of daily decisions. This is where change management becomes critical. It is the ability to help teams separate meaningful signals from background noise and to act with confidence based on that distinction.

For those trying to move forward, there is a practical starting point that often gets overlooked. Identify the questions that are difficult to answer today. These are usually the questions that require excessive effort, multiple sources, or reliance on individual knowledge. They reveal where the gaps exist in how information is captured and structured. Once those gaps are visible, it becomes possible to design systems that address them directly, focusing on relevance and usability instead of volume.

AI will continue to advance, and its potential remains significant. But its effectiveness will always depend on the environment it operates within. Organizations that invest in clarity, clear processes, clear ownership, and clear signals will find that technology enhances their capabilities. Those who do not will continue to struggle, regardless of how advanced their tools become. The difference comes down to discernment and whether it is treated as a priority or an afterthought.


Looking At A Bike Built For The Apocalypse

So-called bug out cars are a rather silly venture that serve little purpose beyond snagging your jumper. The odds of a car working well through a nuclear winter are rather minimal. But what about a bicycle? On paper it’s a better choice: extremely efficient, reliable, and able to run off whatever sustenance you can find in the barren landscape of a collapsed society. But [Seth] over at Berm Peak proved an apocalypse bike is at least as silly as a bug out car.

While a utilitarian bike fit for a cross-country trek across a nuclear wasteland can certainly be a reasonable venture, this particular bicycle is not that. This three-wheeled monstrosity (is it still a bicycle if it has three wheels?) was built by [TOMO] for the Bespoked bike show’s apocalypse buildoff. It placed second among a number of strange bikes with features ranging from pedal-driven circular saws to beer keg grills, but it is easily the strangest of the lot.

The features on this custom build are rather extensive, but the star of the show is the trailing-link, two-wheel-drive rear end. The third wheel was thrown on last minute, with a random shock providing some measure of compliance to the rather unwieldy system. While it adds unnecessary complexity, the third wheel does offer the benefit of bringing along a number of spare parts on the last bikepacking trip of a lifetime. Moreover, it can be easily removed to get something resembling a bicycle.

The front of the bike, while an actual bicycle, is likewise a rather strange build. It’s best described as a fat-tired, long-nosed tall cargo bike. The removable cargo rack keeps the center of gravity near or below the axles, so the bike remains rideable with quite heavy loads. And if ground clearance is needed, the cargo rack can simply be removed, leaving a bike capable of navigating the nuclear wasteland it was made for.

While this is a silly and questionable bike, it’s certainly not the first strange bike we have seen.

Can A $40 Knockoff DeWalt Chainsaw Beat The $130 Original? This Test Found Out

A range of brands make cordless pruning saws, and they’re potentially a very handy addition to your arsenal of outdoor tools. Much like its long-standing rivals Milwaukee and Makita, DeWalt makes a pruning saw that has been copied by knockoff brands that use the same interchangeable batteries as the real thing. Project Farm pitted these saws against each other in a variety of situations, and surprisingly, the knockoff versions of some big-name tools performed impressively well, with the knockoff DeWalt saw not far behind its legitimate counterpart.

In a test that timed how long each saw took to cut through 2×8 lumber, the knockoff DeWalt couldn’t match the real DeWalt saw, with the former taking 5.72 seconds while the latter took just 2.87 seconds. However, the knockoff still outperformed a genuine Makita saw, which took 5.93 seconds to make the same cut.

Another test was designed to see how much downward force each saw could take without stalling. The knockoff DeWalt stalled at 10 pounds, and the real DeWalt stalled at 21 pounds, comfortably beating its counterfeit counterpart. However, Ryobi and Craftsman’s saws both outperformed the real DeWalt, hitting over 30 pounds of downward force before stalling, while the Milwaukee saw that Project Farm tested hit 94 pounds and still didn’t stall, making it the winner by a large margin.

In one key area, the knockoff DeWalt beat the real thing

In terms of pure performance, the real DeWalt ranked mostly above its knockoff counterpart, but in efficiency, the knockoff claimed a surprise victory. Project Farm calculated the runtime per amp-hour of each saw, and the real DeWalt managed 1.38 minutes, the second worst of the test group. Meanwhile, the knockoff DeWalt could run for 1.72 minutes with the same amount of power.

The knockoff could also make significantly more cuts through a hardwood log per amp hour, achieving 10.4 cuts compared to the real DeWalt’s 8.1 cuts. However, both were far behind the best in class, with the Milwaukee saw delivering 41.5 cuts per amp hour. Kobalt took the second-place spot with 31.7 cuts per amp hour. Project Farm’s final combined ranking saw the knockoff DeWalt finish only one place behind the legitimate DeWalt saw, although both were roundly beaten by rivals from Milwaukee, Kobalt, and Ryobi. That might seem surprising considering the major price difference between the two.
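The efficiency figures above can be lined up directly. A quick calculation using the article's cuts-per-amp-hour numbers shows the ranking and the knockoff's edge over the genuine DeWalt:

```python
# Hardwood cuts per amp-hour, as reported in Project Farm's test.
cuts_per_ah = {
    "Milwaukee": 41.5,
    "Kobalt": 31.7,
    "Knockoff DeWalt": 10.4,
    "DeWalt": 8.1,
}

# Rank the saws from most to least efficient.
ranking = sorted(cuts_per_ah, key=cuts_per_ah.get, reverse=True)
print(ranking)  # ['Milwaukee', 'Kobalt', 'Knockoff DeWalt', 'DeWalt']

# The knockoff's relative efficiency edge over the genuine DeWalt.
edge = cuts_per_ah["Knockoff DeWalt"] / cuts_per_ah["DeWalt"] - 1
print(f"{edge:.0%}")  # 28%
```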

However, despite their close ranking in the test, you still probably shouldn’t buy knockoff DeWalt tools. Their lack of warranty and inconsistent production standards can potentially mean you end up spending more money in the long run, and in some cases, knockoffs may even pose a safety risk. Buyers looking for the best-performing pruning chainsaw would be better off considering a rival tool from another major chainsaw brand, or coughing up the cash for the real DeWalt saw, even if it isn’t the best in class.


The consequential AI work that actually moves the needle for enterprises

Presented by OutSystems


After two years of flashy AI demos, rushed agent prototypes, and breathless predictions, enterprise technology leaders are striking a more pragmatic tone in 2026. In a recent webinar hosted by OutSystems, a panel of software executives and enterprise practitioners made the case that the most consequential AI work happening now is focused on the practical matters of governance, orchestration, and iteration, along with integrating agents into the systems they’ve spent decades building.

Enterprise leaders are increasingly focused on fundamentals. The priority is using new AI technologies to accelerate productivity, improve delivery, and produce measurable business results.

Three elements shape this work:

  • The move from AI agent prototypes to agentic systems that deliver measurable ROI in production

  • The growing role of enterprise platforms in governing, orchestrating, and scaling AI agents safely

  • The rise of the generalist developer and enterprise architect as the most valuable technical profiles in an era of AI-generated code

Against this backdrop, the panel discussed governance frameworks, the economics of enterprise AI, and the limits of large language models without orchestration. The conversation ultimately turned to how leading organizations are building multi-agent systems grounded in existing enterprise data and workflows.

Agents in the real world

Enabling agents to work in production across the enterprise is best accomplished with a unified platform that handles development, iteration, and deployment. And that’s where capabilities like the Agent Workbench in the OutSystems platform matter, said Rajkiran Vajreshwari, senior manager of app development at Thermo Fisher Scientific. It provides the infrastructure to learn, iterate, and govern agents at scale.

His team at Thermo Fisher has moved away from single-task AI assistants in customer service to building a coordinated team of specialized agents using the workbench. When a support case arrives, a triage assistant classifies the request and dynamically routes it to the right specialist agent, whether that’s an intent and priority agent, a product context agent, a troubleshooting agent, or a compliance agent.
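As a rough illustration of the triage pattern described above, the sketch below routes an incoming case to one specialist agent. The keyword rules and agent names are hypothetical stand-ins; in production, an LLM-based classifier would do the routing.

```python
# Hypothetical mapping from trigger keywords to narrow specialist agents,
# echoing the roles described in the article.
SPECIALISTS = {
    "refund": "intent_and_priority_agent",
    "error": "troubleshooting_agent",
    "manual": "product_context_agent",
    "regulation": "compliance_agent",
}

def triage(case_text: str) -> str:
    """Route a support case to the first matching specialist agent."""
    text = case_text.lower()
    for keyword, agent in SPECIALISTS.items():
        if keyword in text:
            return agent
    # No keyword matched: fall back to the general classifier.
    return "intent_and_priority_agent"

print(triage("Instrument shows error code E42"))        # troubleshooting_agent
print(triage("Does this meet the new EU regulation?"))  # compliance_agent
```

Each specialist stays narrow and auditable, which is the point of the guardrails the article describes.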

“We don’t have to think about what will work and how. It’s all pre-built,” he explained. “Each agent has a narrow role and clear guardrails. They stay accurate and auditable.”

Governing the risks of shadow AI

A new category of risk emerges when AI makes it possible for anyone in a company to generate production-level code without IT oversight. Basically, this is ungoverned shadow AI. These homegrown products are prone to hallucinations, data leakage, policy violations, model drift, and agents taking actions that were never formally approved.

To get ahead of the risk, leading organizations need to do three things, said Luis Blando, CPTO of OutSystems.

“Give users guardrails. They’re going to use AI whether you like it or not. Companies that seem to be getting ahead are using AI to govern AI across their full portfolio,” he explained. “That is the difference between shadow AI chaos and enterprise-grade scale.”

Eric Kavanagh, CEO of The Bloor Group, noted that governance requires a layered set of disciplines that includes securing data, monitoring models for drift, and making deliberate choices about where AI connects to existing business processes.

“Companies don’t have to be manually creating these controls,” he added. “A lot of those guardrails and levers are baked into platforms like OutSystems.”

Why the real challenge is orchestration, not models

Much of the early excitement around enterprise AI focused on selecting the right large language model. Now the harder challenge, and far more durable source of value, is orchestration. This includes routing tasks, coordinating workflows, governing execution, and integrating AI into existing enterprise systems.

Scott Finkle, VP of development at McConkey Auction Group, noted that LLMs, however impressive, are pieces of complex workflows, not final solutions. Organizations should be ready to hot-swap between Gemini, ChatGPT, Claude, and whatever emerges next without having to rebuild the agentic system around it.

A platform with orchestration capabilities makes that possible. It manages the lifecycle, provides visibility, and ensures processes execute reliably, even as AI handles the reasoning layer on top.

“The AI and the models change, the workflows can change, but the orchestration remains the same,” Finkle said. “That’s how we’re going to extract value out of AI.”

The economics of enterprise AI investing

Security, compliance, governance, and platform-level AI capabilities will all command greater investment in 2026, particularly as AI moves into core workflows like finance and supply chain. Enterprises should favor incremental wins rather than expect big, immediate gains.

“We’re focusing on base hits,” Finkle said. “The way it counts is by getting something into production and having it make an impact. Big investments in pilot projects that don’t make it into production don’t save any money. It’s not going to happen overnight, but over time I think we’ll see tremendous savings.”

There’s still a split in how enterprises are approaching AI transformation. Some start from scratch and reimagine every process. Others, especially those with billions of dollars in existing infrastructure depreciating in-house, want AI to integrate with their systems. They want agentic systems to reuse data, APIs, and proven processes while speeding up delivery. The agent platform approach serves both camps, but particularly the latter. Organizations can deploy agents where they add clear value while preserving the integrity of established, deterministic workflows.

The rise of the enterprise architect and the generalist developer

As AI accelerates code generation, bottlenecks in software delivery are dissolving. In its place is a premium on systems thinking. This is the ability to understand the broader enterprise architecture, decompose complex business problems, and reason about how AI integrates with existing infrastructure. Kavanagh pointed to enterprise architects specifically as the professionals best positioned to capitalize on this moment.

“We’re entering a very interesting age of the generalist,” he explained. “The better you know your enterprise architecture and your business architecture and how those things align, the better off you’re going to be. ”

“The result is faster delivery with fewer interruptions and fewer bugs,” Kavanagh said. “You can focus on the non-repetitive tasks. It’s a benefit to the developer, to the business, and to the whole IT organization.”

Catch the entire webinar here.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.


Blossom Health raises $20 million to put AI copilots alongside psychiatrists

Blossom Health, a New York-based telepsychiatry startup founded in 2024, has raised $20 million in combined seed and Series A funding to scale an AI-powered platform that pairs psychiatrists with clinical copilots and automated administrative support. The round was led by Headline, whose co-founder and managing partner Mathias Schilling is joining the company’s board. Village Global and TA Ventures returned from earlier rounds, with Operator Partners and Correlation Ventures joining as new institutional backers alongside angel investors including founders from General Catalyst, Flatiron Health, Sword Health, and Zip.

The company, founded by CEO John Zhao, is built around a specific premise: that the bottleneck in psychiatric care is not a shortage of clinical knowledge but a shortage of time. Psychiatrists in the United States spend roughly half their working hours on non-clinical tasks, including documentation, billing, insurance authorisation, and scheduling. Blossom’s platform automates much of this through a network of AI agents that handle billing, reception, care coordination, and medical scribing, while a separate set of clinical copilots assist with symptom evaluation, diagnosis refinement, and medication selection during patient encounters.

The scale of the problem

The psychiatric workforce shortage in the United States is severe and worsening. More than 122 million Americans live in federally designated mental health professional shortage areas, according to the Health Resources and Services Administration. The national psychiatrist-to-population ratio stands at one provider for every 5,058 residents. Roughly 60 per cent of practising psychiatrists are 55 or older, meaning a significant portion of the existing workforce will retire within the next decade. Wait times for an initial psychiatric appointment range from three weeks to six months depending on location, and in many rural counties there are no psychiatrists at all.

This gap has created a market. US digital health startups raised $14.2 billion in 2025, the highest total since 2022, with AI-powered companies accounting for 54 per cent of that funding. Within mental health specifically, Talkiatry, an in-network telepsychiatry platform, raised $210 million in February 2026. Spring Health, which uses AI for personalised treatment recommendations, is valued at $3.3 billion. Ambient clinical scribes, the category of AI that automatically generates notes from patient conversations, produced $600 million in revenue last year alone.


Blossom is small by comparison. The company says its tools are used by hundreds of clinicians treating more than 10,000 patients across multiple US states. Most patients are seen within 48 hours, with many receiving same-day appointments. Blossom accepts all major commercial insurers, including Optum UnitedHealthcare, Aetna, Cigna Evernorth, and Blue Cross Blue Shield, with average copays of around $22.

Copilot, not replacement

The “copilot” framing is deliberate and important. Blossom is not building a therapy chatbot. Its AI tools sit alongside licensed psychiatrists during clinical encounters, surfacing relevant information, helping evaluate symptoms against diagnostic criteria, and suggesting medication adjustments based on the patient’s history and current presentation. The psychiatrist retains clinical authority over every decision.

Between appointments, the platform uses AI agents to maintain contact with patients through text-based check-ins on sleep, mood, medication adherence, and other indicators. Fortune reported that in the case of postpartum depression, for example, the system follows up with conversational prompts that surface warning signs and prepare information for clinicians ahead of the next visit. This approach converts what has traditionally been episodic care, where a patient sees a psychiatrist for 15 minutes every few months and is otherwise unsupported, into something closer to continuous monitoring.

The clinical claims are plausible but early. Blossom says it has demonstrated the ability to stabilise mental health conditions and prevent progression toward more intensive care, but the company has not published peer-reviewed clinical evidence. At 10,000 patients, the dataset is meaningful for a company this young but far too small to draw population-level conclusions about clinical efficacy.

The Cerebral cautionary tale

Any startup operating at the intersection of AI, telepsychiatry, and controlled substance prescribing inherits the reputational burden of what came before. Cerebral, the telemental health company that raised $300 million at a $4.8 billion valuation in 2022, became the subject of a Department of Justice investigation into its prescribing practices for controlled substances and paid a $7 million settlement to the Federal Trade Commission over allegations of misleading cancellation policies and data sharing. The company’s rapid growth, which prioritised patient volume over clinical rigour, damaged trust across the sector.

Blossom’s architecture is different in important ways. It works through licensed psychiatrists rather than nurse practitioners prescribing independently, and its AI tools are positioned as decision support rather than decision-makers. But the fundamental tension remains: scaling psychiatric care through technology requires maintaining clinical quality at volumes that a traditional practice model was never designed to handle. The AI copilot must be good enough to genuinely assist clinicians without introducing errors that a time-pressed psychiatrist might not catch, particularly in medication selection, where psychiatric pharmacology is notoriously complex and highly individual.

The $20 million will fund expansion into additional US states, new insurance partnerships, clinician recruitment, and continued research and development. For a company founded less than two years ago, treating over 10,000 patients with in-network insurance coverage is a notable operational achievement. Whether the clinical copilot meaningfully improves outcomes, or simply makes it faster to deliver care at the same quality, is the question the next round of funding will need to answer.


Engineering courses for Ireland’s students and professionals

Anyone looking for a new opportunity in the engineering space should consider one of the following courses as part of their upskilling process.

Engineering focus banner

While the STEM sector is innovative, exciting and dynamic, the careers within it demand consistent, up-to-date training to ensure professionals are operating at their peak. Upskilling is an essential element of an engineering career, and with that in mind, the following courses could be an ideal way for you to stay in the know and ahead of the game.

Coursera

For those looking for an introductory course that can be undertaken flexibly, IBM is running a free Introduction to Software Engineering programme via Coursera. It consists of six modules, is aimed at beginners and can be completed at the learner’s own pace, with a certificate awarded upon completion.

Also available via Coursera, with free online enrolment, is a 12-week Introduction to Systems Engineering specialisation offered by the University of Colorado Boulder. The beginner-level programme can be accessed in eight languages and aims to teach the fundamentals, methods, practices and processes of industry-standard systems engineering.

Coursera also offers a range of courses designed for more advanced learners, such as the Microsoft AI & ML Engineering Professional Certificate, a free six-month programme that aims to prepare students and professionals for careers in artificial intelligence and machine learning. Coursera itself also runs a free four-week Deep Learning Engineering specialisation for advanced students.

Further Education and PLC

For students based in Louth looking to earn a level five qualification, there is a Further Education and PLC level five QQI course in Engineering Technology. The programme is said to equip learners with the skills and knowledge needed to gain employment, access apprenticeship programmes and progress to further study in universities and institutes of technology. Graduates will have the opportunity to find employment in areas such as engineering machine operations, civil engineering, electrical engineering and advanced manufacturing, among others.

In Dunboyne, Co Meath, there is a year-long level five Engineering Technology course open to prospective students considering further education. It is a pre-university engineering course designed to prepare students for work in the field of engineering via entry into a third-level institution. Graduates of this course can pursue further study at degree level and will be well placed to gain apprenticeships across the many different engineering sectors through Generation Apprenticeship Ireland. There is a range of level five opportunities available nationwide, so be sure to find one that is convenient.

South East Technological University

South East Technological University (SETU) has dozens of opportunities for engineering students and professionals to fit a range of lifestyles and ambitions. Courses for those pursuing a bachelor’s degree include standard four-year programmes in areas such as agricultural systems engineering, electronic engineering, aerospace engineering and automation engineering, among others.

For more established students, there are also several one-year master’s degrees, such as the Master of Science in Engineering Research and Innovation and the Master of Science in Sustainable Energy Engineering. Different courses have unique requirements, commitments and prices, so make sure to read up on your chosen course first.

Udemy

Educational platform Udemy has a free Engineering Mechanics Fundamentals to Proficiency course, which can be undertaken online at the learner’s convenience. The programme is aimed at beginners, covers topics such as foundational mechanics and principles, and requires an understanding of basic maths and physics. Udemy has dozens of free and paid options to suit a variety of budgets and lifestyles, including The Complete Full Stack AI Engineering Bootcamp, Site Reliability Engineering and The Complete Mechanical Engineering Course, which claims to offer 12 courses in one.

Whether you are a complete novice, an enthusiast, a graduate or an established professional, there is really no incorrect way to engage with learning, provided you have a clear idea of what it is you hope to achieve from the experience. So make sure to do your research, identify your weaknesses and shop around for the course or learning materials that match your ambitions and available resources. 


Vizio TVs Now Require Walmart Accounts For Smart Features

An anonymous reader quotes a report from Ars Technica: Prospective Vizio TV buyers should know there’s a good chance the set won’t work properly without a Walmart account. In an attempt to better serve advertisers, Walmart, which bought Vizio in December 2024, announced this week that select newly purchased Vizio TVs now require a Walmart account for setup and accessing smart TV features. Since 2024, Vizio TVs have required a Vizio account, which a Vizio OS website says is necessary for accessing “exclusive offers, subscription management, and tailored support.” Accounts are also central to Vizio’s business, which is largely driven by ads and tracking tied to its OS.

A Walmart spokesperson confirmed to Ars Technica that Walmart accounts will be mandatory on “select new Vizio OS TVs” for owners to complete onboarding and to use smart TV features. The representative added: “Customers who already have an existing Vizio account are being given the option to merge their Vizio account with their Walmart account. Customers with an existing Vizio account can opt out by deleting their Vizio account.” The representative wouldn’t confirm which TV models are affected. Walmart’s representative said the Walmart account integration is “designed to respect consumer choice and privacy, with data used in aggregated, permissioned, and compliant ways” but didn’t specify how.


Washington state needs a ‘coherent’ story to compete in AI, leaders agree

The Washington Technology Industry Association held its Tech in Focus roundtable on March 25, 2026, in Seattle. Credit: Ken Yeung

Washington state may have everything it needs to become a global AI hub. The problem is, it hasn’t figured out how to say so, and its political and tech leaders agree it’s time they got to work on it. 

On Wednesday, the Washington Technology Industry Association (WTIA) convened a roundtable of civic and industry leaders from throughout the Seattle region to ask a pointed question: What will it actually take for Washington state to stop playing catch-up with Silicon Valley and start leading?

At the center of the debate was the nonprofit’s latest white paper, “Seattle’s AI Advantage: The Path to Global Leadership.”

In it, the author and futurist Alex Lightman argues the Emerald City holds six distinct advantages over rival tech hubs: abundance of clean energy, a backyard full of hyperscalers like Microsoft and Amazon, an acceptance of using AI to continuously improve AI and software, access to quantum computing, the ability to run large-scale simulations cheaply, and a growing foothold in space technology.

These assets, he contends, are what position Seattle to become a top-five U.S. city economically, comparable to a G7 economy with a $1 trillion GDP.

Yet while WTIA’s white paper argues that the city has incredible potential, the lobbying group emphasizes that the document is a roadmap. The real challenge is figuring out what happens next. Once the talking is done, who’s going to organize the effort to transform the state?

“I think one of the most important things we can do is start telling this story,” said Randa Minkarah, WTIA chief operations executive, referring to Washington’s need to establish itself as a leading, responsible AI and advanced technology region. “How do we get that out there that changes people’s point of view?”

Once that narrative takes hold, it can create momentum—”a storytelling flywheel” that spreads best practices and lessons across communities and organizations, Minkarah added.

Washington’s struggle to tell a coherent AI story isn’t caused by a single issue, but rather by a host of issues. Rachel Smith, president of the Washington Roundtable, pointed to a three-way misalignment between federal priorities and dollars, state priorities and dollars, and what is actually happening on the ground in communities.

“When those things are all misaligned, it feels like we spend a whole lot of money and we don’t get a whole lot out of it,” she said.

Smith called for a broader strategy focused on economic competitiveness and tax reform. This is a topic of debate after state lawmakers approved a new income tax on high earners this month. One investor in the audience underscored the issue, noting that some of the people writing checks in Washington’s tech ecosystem have moved their residences out of state.

Beau Perschbacher, senior policy advisor for Governor Bob Ferguson, participated in the WTIA roundtable discussion on how to make Washington a global AI state. Credit: Ken Yeung

There’s also the failure to make AI’s benefits accessible to everyday Washingtonians, as indigenous communities and local residents feel excluded. And compounding the issue is the lack of strategic alignment, as Washington has pared back its economic development strategy. That’s not what community leaders want—they want Olympia to take the lead. 

“That is a place where the state having a direction on the AI industry, where we want to go, would be super helpful,” said Jesse Canedo of the City of Bellevue. Beau Perschbacher, Governor Bob Ferguson’s senior policy advisor for economic development, didn’t disagree.

So what actually needs to happen? 

Panelists didn’t hold back when asked what Washington’s leaders must do in the next 24 months: Joe Nguyen, a former Washington State senator and CEO of the Seattle Metropolitan Chamber of Commerce, wants more risk-takers—businesses willing to be first movers in adopting AI within their industries and then evangelize what’s possible.

Jesse Canedo, chief economic development officer for the City of Bellevue, hopes operators can execute on the white paper’s vision. 

“Seattle as a region does a lot of great visioning,” he said. “It needs a lot of operationalizing of the big, bold ideas…Housing, people, and energy are the three big things that we can operationalize very quickly out of this vision.”

Not everyone agreed on the path forward. 

Alvin Graylin, a fellow at Stanford’s Institute for Human-Centered Artificial Intelligence, argued that Washington should position itself as a global hub for open-source AI rather than following Silicon Valley’s closed-model, big-spending approach. 

He pointed to Chinese labs producing near-equivalent models at a fraction of the cost, and said Washington could tap into millions of open-source developers worldwide rather than competing for a few thousand elite researchers at big labs.

Futurist Alex Lightman discusses his WTIA-commissioned whitepaper on Seattle’s AI advantage. Credit: Ken Yeung

Lightman, the white paper’s author, was skeptical. He noted that Microsoft made Netscape’s browser irrelevant by giving its own browser away, then made trillions selling everything around it. Open source has a ceiling, he argued, and it wouldn’t get Seattle to a trillion-dollar economy.

Separately, Perschbacher wants more federal funding to come to the state, and to improve community outreach to bring more people along as partners.

Can these leaders take all of their ideas and turn them into action? At the very least, the WTIA secured two pledges: The Washington Roundtable and the Seattle Metro Chamber both said they would work with the Governor’s office to shape a statewide economic development strategy, and Perschbacher committed to leading a federal funding working group.

Others joining the conversation included Alicia Teel, deputy director of Seattle’s Office of Economic Development. In addition to Minkarah, representing WTIA were Vice President of Innovation and Entrepreneurship Nick Ellingson, Chair of the Advanced Technologies Cluster Arry Yu, and Director of Industry and Community Relations Terrance Stevenson.


Next-gen AI breakthrough promises chatbots that can read the room better

Have you ever asked a chatbot something and felt like it completely missed your point? You say something with a bit of nuance, and the AI misses the subtlety entirely. That is exactly the problem researchers are trying to solve.

Even though the emotional connection with AI can feel deeper than human conversation for many users, most AI systems today still treat a sentence as a single block of sentiment. If you mix praise and criticism, the nuance often gets lost.

The research, by Zhifeng Yuan and Jin Yuan, introduces a model that can break down a sentence and understand how you feel about each part, instead of generalizing everything into one response.

How this system helps AI read your intent better

Think about a sentence like, “The food was great, but the service was terrible.” A typical AI chatbot might struggle because the sentence has both positive and negative emotions.

The proposed model looks at each part of the sentence separately and connects each emotion to the right subject. It relies on an ‘emotional keywords attention network’ to do that.

In simple terms, it teaches AI to focus on words that carry strong emotions, such as “great” or “terrible.” These words guide the system toward understanding what matters most in the sentence.

The model then links those emotional cues to a specific aspect. It learns that “great” applies to food, while “terrible” applies to service. This process, known as aspect-level sentiment analysis, makes responses far more precise.

It also uses attention mechanisms to understand context, so it does not rely on keywords alone. It can figure out how different parts of a sentence connect. Researchers say this method performs better than existing models on standard benchmarks.
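The aspect-linking idea described above can be illustrated with a short, deliberately simplified sketch. Everything in it is a hypothetical stand-in: the lexicon, the aspect list, and the proximity rule that substitutes for a learned attention network are invented for demonstration and are not the researchers’ actual model.

```python
# Toy aspect-level sentiment analysis. A real system would learn these
# associations from data; here a hand-built lexicon and a nearest-aspect
# rule stand in for the trained attention mechanism.

EMOTION_LEXICON = {"great": 1.0, "good": 0.7, "awful": -0.9, "terrible": -1.0}
ASPECTS = {"food", "service"}

def aspect_sentiments(sentence):
    """Link each emotion-bearing word to the nearest aspect term."""
    tokens = [t.strip(".,!?").lower() for t in sentence.split()]
    aspect_positions = [(i, t) for i, t in enumerate(tokens) if t in ASPECTS]
    results = {}
    for i, tok in enumerate(tokens):
        if tok in EMOTION_LEXICON and aspect_positions:
            # Stand-in for attention: the closest aspect term wins.
            _, nearest = min(aspect_positions, key=lambda p: abs(p[0] - i))
            results[nearest] = EMOTION_LEXICON[tok]
    return results

print(aspect_sentiments("The food was great, but the service was terrible."))
# {'food': 1.0, 'service': -1.0}
```

The input/output shape is the point: instead of collapsing the sentence to one overall score, the system returns a separate sentiment score per aspect, which is what lets a chatbot respond to the praise and the complaint independently.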

This approach can make AI chatbots feel more human

If adopted widely, this could change how AI responds in real-world situations. Chatbots could handle nuanced feedback more effectively instead of defaulting to generic replies. Customer support systems could pinpoint exactly what went wrong and respond with greater accuracy.

While concerns grow around AI chatbots mirroring human personality traits a little too well, one thing is clear. AI is here to stay, and if it is going to be part of everyday conversations, it needs to get better at reading the room.
