
‘A Rigged and Dangerous Product’: The Wildest Week for Prediction Markets Yet


Kalshi CEO Tarek Mansour posted a video on Wednesday of six men decked out in business casual doing push-ups on the sidewalk. “This is how Kalshi Q1 board meeting ended,” he wrote on X. The board members are laughing and smiling in the video after their impromptu cardio session, and the mood is jubilant. The next day, it became clear that the team had ample reason to celebrate: Kalshi had just raised $1 billion at a $22 billion valuation, making the company worth on paper roughly double what it was only a few months ago.

The funding round represented a bright spot during one of the most turbulent weeks for the prediction market industry yet. In just the past five days, Nevada temporarily banned Kalshi by issuing a temporary restraining order and Arizona filed criminal charges accusing it of running an illegal gambling business; an Israeli reporter said that he received an avalanche of threats from Polymarket traders furious about how a story he wrote impacted their wagers; Polymarket scored a major deal with Major League Baseball, further entrenching itself in the world of professional sports; and US Senators introduced legislation to ban specific types of markets offered by the industry, including any involving “government actions, terrorism, war, assassination, and events where an individual knows or controls the outcome.” It is the latest in a series of bills intended to place guardrails around the prediction industry.

Senator Chris Murphy, a cosponsor of the bill and one of the industry’s most outspoken critics, said in an interview with WIRED that prediction markets are “a rigged and dangerous product,” and represent “a brand-new source of mind-bending corruption.”

“Kalshi already bans insider trading and markets directly tied to death and war,” says Kalshi spokesperson Elisabeth Diana. “As a US-based exchange, we support regulators and policymakers from both sides of the aisle in their efforts to keep these markets safe and responsible in America.” Polymarket did not return requests for comment.


Existing law gives the Commodity Futures Trading Commission, the agency that oversees prediction markets, the authority to ban offerings related to assassination, war, terrorism, and other subjects deemed contrary to the public interest. Some prediction markets already stay away from these categories. But not all of their users understand where exactly the lines are drawn, which created a messy situation when some assumed that a market on the fate of Iran’s supreme leader would result in a payout if he “left office” by getting killed.

Meanwhile, Polymarket, which largely operates outside of the United States, offers plenty of war markets, but legislation is unlikely to impact these offerings. The platform is currently offering a market on whether Israeli Prime Minister Benjamin Netanyahu will be “out” by certain dates; someone recently wagered $177,000 that he would be out by March 31. Polymarket would likely resolve the market to “yes” and allow its bettors to profit if Netanyahu dies, just as it did when Iran’s supreme leader, Ali Khamenei, was killed.

One of the reasons Senator Murphy is so passionate about prediction markets is because he sees them as vectors for insider trading. The Israeli government, for example, has charged two of its citizens with leaking classified information by placing Polymarket bets tied to the war in Iran. The Connecticut lawmaker suspects that other trades related to the conflict may have been carried out by members of Trump’s inner circle who have advanced knowledge about military operations. “It’s bone chilling to think that there are staffers inside the situation room that are pushing the United States into war, not because it’s good for our security, but because they’re going to make $100,000 off it,” he says.


Tech Moves: Carbon Robotics’ new CFO; Microsoft gaming GM goes to Netflix; Nordstrom gets VP of AI


Kevan Krysler, CFO of Carbon Robotics. (Carbon Robotics Photo)

Agtech company Carbon Robotics appointed Kevan Krysler as chief financial officer. The Seattle startup, known for zapping weeds with lasers, reports it has surpassed $100 million in annual revenue. Carbon has also been name-checked twice by U.S. Health and Human Services Secretary Robert F. Kennedy Jr. for its pesticide-free approach to weed control.

Krysler joins Carbon from Silicon Valley-based Pure Storage, where he also served as CFO.

Carbon CEO Paul Mikesell said in a statement that Krysler “really gels with our culture and brings public company financial and executive experience to round out our team. This is indicative of Carbon Robotics pushing forward and evolving our leadership to match our rapidly increasing maturity in the market.”

Founded in 2018, Carbon has raised $177 million to date and employs about 260 people. The company operates a manufacturing facility in Richland, Wash., and ranks No. 10 on the GeekWire 200, our list of the top privately held startups in the Pacific Northwest.

T-Mobile has promoted Allan Samson to chief marketing officer after nearly a decade with the telecom giant. In recent years he has led its broadband business, scaling its 5G Home Internet offering nationally, and worked to advance its fiber strategy and joint ventures.

“As CMO, Allan will bring the full power of our marketing organization into one connected performance marketing engine, aligning media, pricing, portfolio, product marketing, innovation and digital experience,” said Mike Katz, T-Mobile’s chief business and product officer, on LinkedIn.

Keith Dolliver. (LinkedIn Photo)

— Attorney Keith Dolliver has retired after more than three decades at Microsoft, where he worked on initiatives involving LinkedIn, GitHub, Activision, Mojang (Minecraft) and others. He departs as vice president, deputy general counsel and corporate secretary.

Dolliver thanked executive leadership, partners and legal colleagues, and the corporate legal group, which he had led.


“I will miss all of you and will be cheering you on as you continue to take this consequential company forward,” he said on LinkedIn. He also credited his family, “who made home a place of positive energy, treated me with patience and grace, and were always in my corner.”

Haiyan Zhang. (GeekWire Photo)

Haiyan Zhang is leaving Microsoft for Netflix, where she’ll take on a role in gaming. Zhang spent more than 13 years at Microsoft, holding positions across Microsoft Gaming, Microsoft Research and Xbox Studios, most recently as general manager and partner for Gaming AI.

“Reflecting back, I still remember stepping through the doors at 30 Great Pulteney Street on March 27, 2013, into a newly formed Xbox game studio in London,” Zhang said on LinkedIn. “I felt at once excitement, trepidation, and optimism. As I step into this next chapter, I find many of those same emotions returning as I look ahead.”

Zhang is also founder and CEO of Thriven Foundation Labs, a nonprofit promoting AI for social good. Her wide-ranging career includes roles at BBC and IDEO in the United Kingdom.

Graham Sheldon. (LinkedIn Photo)

Graham Sheldon has resigned as chief product officer for AI automation giant UiPath after more than three years with the company. UiPath, which is based in New York City, has an office in Bellevue, Wash.

Sheldon was previously with Microsoft for more than 20 years, leaving the role of corporate VP of product for Microsoft Teams. Early in his tenure, Sheldon served as technical advisor to Satya Nadella when the now CEO was a senior vice president. Sheldon also led engineers working on Bing, ads, MSN, Cortana and other initiatives.


Sheldon didn’t disclose his next move on LinkedIn, but said he’d be tackling bucket list items including getting his commercial pilot license, running a marathon, cheering on his daughter’s select soccer team and building with OpenClaw.

— Seattle’s Redfin has promoted Ariel Dos Santos to chief product and design officer. Dos Santos has been with the real estate platform for nearly four years. His career has also included roles at Amazon, where he helped lead the launch of Just Walk Out Technology, and at Microsoft, where he oversaw social marketing.

Vinit Tople. (LinkedIn Photo)

Vinit Tople is now vice president of AI and developer platforms at Seattle’s Nordstrom. He previously spent more than 12 years at Amazon, most recently as head of product for Alexa, and later worked at JPMorgan Chase, helping lead adoption of AI agents.

“Nordstrom, often called a ‘century-old startup,’ has reinvented itself time and again over 125 years — evolving ahead of each new era of retail — and now it’s making a bold move to put AI at the center of its next chapter,” Tople said on LinkedIn.

Sanjay Parmar. (LinkedIn Photo)

Chronus named Sanjay Parmar as chief AI officer for the Seattle-based mentoring software platform. He joins from Degreed, where he was CTO of the San Francisco Bay Area company.

Chronus CEO Ankur Ahlowalia, who took the helm in January, praised Parmar’s background in enterprise SaaS and AI-powered workforce solutions, saying in a statement it would help the company “make life-changing mentorship accessible to everyone.”


— Law firm Dorsey & Whitney appointed Cyrus Ansari as a technology commerce partner at its Seattle office. He was previously with two other Seattle firms: Perkins Coie and Davis Wright Tremaine.

“For several years now, my work has centered on commercial deals for cloud, AI, gaming and other technology businesses,” Ansari said on LinkedIn. “That focus continues at Dorsey.”

— Seattle’s Richard Moulds — a self-described car restorer, advisor, mentor and investor — joined the supervisory board of QuiX Quantum, a Netherlands-based developer of photonic quantum computing systems. Moulds left his role as general manager with AWS last year and now serves as a strategic advisor for quantum startups QEDMA and Nu Quantum.


Three ways AI is learning to understand the physical world


Large language models are running into limits in domains that require an understanding of the physical world — from robotics to autonomous driving to manufacturing. That constraint is pushing investors toward world models, with AMI Labs raising a $1.03 billion seed round shortly after World Labs secured $1 billion.

Large language models (LLMs) excel at processing abstract knowledge through next-token prediction, but they fundamentally lack grounding in physical causality. They cannot reliably predict the physical consequences of real-world actions. 

AI researchers and thought leaders are increasingly vocal about these limitations as the industry tries to push AI out of web browsers and into physical spaces. In an interview with podcaster Dwarkesh Patel, Turing Award recipient Richard Sutton warned that LLMs just mimic what people say instead of modeling the world, which limits their capacity to learn from experience and adjust themselves to changes in the world.

This is why models based on LLMs, including vision-language models (VLMs), can show brittle behavior and break with very small changes to their inputs. 


Google DeepMind CEO Demis Hassabis echoed this sentiment in another interview, pointing out that today’s AI models suffer from “jagged intelligence.” They can solve complex math olympiads but fail at basic physics because they are missing critical capabilities regarding real-world dynamics. 

To solve this problem, researchers are shifting focus to building world models that act as internal simulators, allowing AI systems to safely test hypotheses before taking physical action. However, “world models” is an umbrella term: three distinct architectural approaches have emerged, each with different tradeoffs.

JEPA: built for real-time

The first main approach focuses on learning latent representations instead of trying to predict the dynamics of the world at the pixel level. Endorsed by AMI Labs, this method is heavily based on the Joint Embedding Predictive Architecture (JEPA). 


JEPA models try to mimic how humans understand the world. When we observe the world, we do not memorize every single pixel or irrelevant detail in a scene. For example, if you watch a car driving down a street, you track its trajectory and speed; you do not calculate the exact reflection of light on every single leaf of the trees in the background. 

V-JEPA architecture (source: Meta FAIR)

JEPA models reproduce this human cognitive shortcut. Instead of forcing the neural network to predict exactly what the next frame of a video will look like, the model learns a smaller set of abstract, or “latent,” features. It discards the irrelevant details and focuses entirely on the core rules of how elements in the scene interact. This makes the model robust against background noise and small changes that break other models.

This architecture is highly compute- and memory-efficient. By ignoring irrelevant details, it requires far fewer training examples and runs with significantly lower latency. These characteristics make it suitable for applications where efficiency and real-time inference are non-negotiable, such as robotics, self-driving cars, and high-stakes enterprise workflows.
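The latent-prediction idea can be sketched in a few lines of plain Python. This toy is nothing like Meta's actual V-JEPA (which uses large vision transformers); it encodes each noisy one-dimensional "frame" down to a single latent value, the object's position, and predicts the next latent with a constant-velocity rule. The background noise that would confuse a pixel-level predictor never enters the latent space:

```python
import random

random.seed(0)

def make_frame(pos, width=32):
    """A 1D 'frame': a bright object at integer `pos` plus background noise."""
    return [(5.0 if i == pos else 0.0) + random.random() * 0.5 for i in range(width)]

def encode(frame):
    """Toy encoder: collapse the frame to a single latent, the object's position."""
    return max(range(len(frame)), key=lambda i: frame[i])

def predict_latent(z_prev, z_curr):
    """Toy predictor: assume constant velocity in latent space."""
    return z_curr + (z_curr - z_prev)

# The object moves 2 pixels per frame; the background differs frame to frame.
frames = [make_frame(pos) for pos in (4, 6, 8)]
z0, z1, z2 = (encode(f) for f in frames)

latent_pred = predict_latent(z0, z1)
latent_err = abs(latent_pred - z2)

# Pixel-space baseline: naively predict frame 2 as a copy of frame 1.
pixel_err = sum(abs(a - b) for a, b in zip(frames[1], frames[2])) / len(frames[2])

print(latent_err, round(pixel_err, 2))
```

In this sketch the latent predictor is exact while the naive pixel-space prediction accumulates error from noise and motion, which is the intuition behind JEPA's robustness and efficiency claims.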


For example, AMI is partnering with healthcare company Nabla to use this architecture to simulate operational complexity and reduce cognitive load in fast-paced healthcare settings. 

Yann LeCun, a pioneer of the JEPA architecture and co-founder of AMI, explained in an interview with Newsweek that world models based on JEPA are designed to be “controllable in the sense that you can give them goals, and by construction, the only thing they can do is accomplish those goals.”

Gaussian splats: built for space

A second approach leans on generative models to build complete spatial environments from scratch. Adopted by companies like World Labs, this method takes an initial prompt (it could be an image or a textual description) and uses a generative model to create a 3D Gaussian splat. A Gaussian splat is a technique for representing 3D scenes using millions of tiny, mathematical particles that define geometry and lighting. Unlike flat video generation, these 3D representations can be imported directly into standard physics and 3D engines, such as Unreal Engine, where users and other AI agents can freely navigate and interact with them from any angle.
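As a rough illustration of what a "splat" is, the one-dimensional sketch below accumulates Gaussian contributions onto a row of pixels. Real 3D Gaussian splatting adds projection, depth ordering, and view-dependent color; this toy keeps only the core idea that a scene is a sum of smooth, individually editable particles:

```python
import math

def splat(gaussians, width=10):
    """Render 1D Gaussian 'splats' onto a row of pixels.

    Each splat is (center, sigma, amplitude); pixel brightness is the sum of
    every Gaussian's contribution, the same accumulation idea 3D Gaussian
    splatting uses, minus projection and view-dependent shading.
    """
    image = []
    for x in range(width):
        value = sum(a * math.exp(-((x - c) ** 2) / (2 * s * s))
                    for c, s, a in gaussians)
        image.append(round(value, 3))
    return image

# A 'scene' of two blobs; edit the list and re-render at will.
scene = [(2.0, 0.8, 1.0), (7.0, 1.5, 0.5)]
image = splat(scene)
print(image)
```

Because the scene is just a list of particles, moving or deleting an object is a one-line edit, which is part of why these representations slot so easily into existing 3D tools.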

The primary benefit here is a drastic reduction in the time and one-time generation cost required to create complex interactive 3D environments. It addresses the exact problem outlined by World Labs founder Fei-Fei Li, who noted that LLMs are ultimately like “wordsmiths in the dark,” possessing flowery language but lacking spatial intelligence and physical experience. World Labs’ Marble model gives AI that missing spatial awareness. 


While this approach is not designed for split-second, real-time execution, it has massive potential for spatial computing, interactive entertainment, industrial design, and building static training environments for robotics. The enterprise value is evident in Autodesk’s heavy backing of World Labs to integrate these models into their industrial design applications.

End-to-end generation: built for scale

The third approach uses an end-to-end generative model to process prompts and user actions, continuously generating the scene, physical dynamics, and reactions on the fly. Rather than exporting a static 3D file to an external physics engine, the model itself acts as the engine. It ingests an initial prompt alongside a continuous stream of user actions, and it generates the subsequent frames of the environment in real-time, calculating physics, lighting, and object reactions natively. 

DeepMind’s Genie 3 and Nvidia’s Cosmos fall into this category. These models provide a simple interface for generating endless interactive experiences and massive volumes of synthetic data. DeepMind demonstrated this with Genie 3, showcasing how the model maintains strict object permanence and consistent physics at 24 frames per second without relying on a separate memory module.
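The control flow of this model-as-engine loop can be sketched as follows. This is a deliberately trivial stand-in (a position update plus an ASCII renderer) rather than anything resembling Genie 3 or Cosmos internally, but it shows the key structural difference: actions stream in, and frames stream out of the model itself, with no external physics engine or exported 3D asset:

```python
def toy_world_model(actions, width=8):
    """Autoregressive toy 'world model': the model itself acts as the engine.

    Each step consumes one user action and emits the next frame directly.
    """
    pos, frames = 0, []
    for action in actions:  # continuous stream of user inputs
        # 'Physics' lives inside the model: clamp movement to the world bounds.
        pos = max(0, min(width - 1, pos + {"left": -1, "right": 1}[action]))
        frames.append("".join("#" if i == pos else "." for i in range(width)))
    return frames

for frame in toy_world_model(["right", "right", "left", "right"]):
    print(frame)
```

Swapping the dictionary lookup for a large neural network that predicts pixels is, conceptually, what makes this loop so compute-hungry in the real systems.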


This approach translates directly into heavy-duty synthetic data factories. Nvidia Cosmos uses this architecture to scale synthetic data and physical AI reasoning, allowing autonomous vehicle and robotics developers to synthesize rare, dangerous edge-case conditions without the cost or risk of physical testing. Waymo (a fellow Alphabet subsidiary) built its world model on top of Genie 3, adapting it for training its self-driving cars.

The downside to this end-to-end generative method is the heavy compute cost of continuously rendering physics and pixels simultaneously. Still, the investment is necessary to achieve the vision laid out by Hassabis, who argues that a deep, internal understanding of physical causality is required because current AI lacks the capabilities to operate safely in the real world.

What comes next: hybrid architectures

LLMs will continue to serve as the reasoning and communication interface, but world models are positioning themselves as foundational infrastructure for physical and spatial data pipelines. As the underlying models mature, we are seeing the emergence of hybrid architectures that draw on the strengths of each approach. 

For example, cybersecurity startup DeepTempo recently developed LogLM, a model that integrates elements from LLMs and JEPA to detect anomalies and cyber threats from security and network logs. 


Microsoft rolls back some of its Copilot AI bloat on Windows


Microsoft announced on Friday a series of changes focused on improving the quality of its Windows 11 operating system, which notably includes dialing back the number of entry points to its AI assistant, Copilot.

The company said it will reduce Copilot AI integrations in some apps, starting with Photos, Widgets, Notepad, and its Snipping Tool.

Under the heading of “integrating AI where it’s most meaningful,” Pavan Davuluri, EVP of Windows and Devices, wrote on the company’s blog that Microsoft is becoming more intentional about “how and where Copilot integrates across Windows.” Its goal, he explained, is to focus on AI experiences that are “genuinely useful.”

This “less-is-more” approach to integrating AI into existing platforms may reflect the growing consumer pushback against AI bloat. While many people today understand AI to be a useful tool, there are also concerns around trust and safety. For instance, a Pew Research study published this month noted that half of U.S. adults are now more concerned than excited about AI as of June 2025, up from 37% in 2021.


This is not the first time Microsoft has rethought its Copilot integrations. Earlier this month, the news site Windows Central said the company’s plan to ship Copilot-branded AI features across Windows 11 had been quietly shelved. This, the site said, included some system-level integrations within the Settings app, File Explorer, and elsewhere.

Before this, Microsoft had delayed the launch of its AI-powered memory feature, Windows Recall for Copilot + PCs, for over a year as it tried to address users’ privacy concerns. The Recall feature launched last April, but security vulnerabilities are still being discovered.

It’s clear that user feedback is influencing Microsoft’s moves around AI on Windows. Davuluri wrote that he and his team have spent the past several months listening to the community about how they’d like to see Windows improved.

The Copilot rollback is just one of the changes being made.


The company said it’s also introducing the ability to move the taskbar to the top or sides of the screen, giving users more control over system updates, speeding up File Explorer, improving the Widgets experience, updating the Feedback Hub, and making it easier to navigate its Windows Insider Program — a community that offers feedback about Windows’ future.


How Long Can A Quadcopter Drone Fly On Just Solar?


The final second prototype flying. (Credit: Luke Maximo Bell, YouTube)

The dream of fully powering everything from aircraft to cars with just the output of solar panels attached to the machine remains a tempting one, but it always seems to require some serious engineering, including putting the machine on a crash diet. The quadcopter that [Luke Maximo Bell] tried to fly off just solar power is a good case in point, as the first attempt crashed after three minutes and wrecked its solar panels. Now he’s back with a second attempt that ought to stay airborne for as long as the sun is shining.

Among the flaws of the first prototype was poor support for the very thin and fragile PV panels, which required much better mounting on the drone’s carbon fiber frame. To carry the very large solar array, the first drone’s arms were made very long, but this interfered with maneuvering, so the second version’s arms were trimmed down and the array was raised above the frame. Shortening the tubes saved 70 grams, weight which could then be spent on the new panel supports.

After an initial test flight ended in a crash when the PV output dropped, the need for a small battery buffer was clear, so one was added, along with a reduction of the array to 4×7 panels to match the battery’s 20 V. The array also had to be reinforced, as the thin panels were very wobbly and made it impossible to fly in any significant wind.
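A quick back-of-the-envelope budget shows why the margin is so thin. Every number below is an invented placeholder (the video does not publish exact hover power or per-panel output), but the structure of the calculation explains the observed behavior: in full sun the 4×7 array barely covers hover, and any dip in irradiance pushes the deficit onto the battery:

```python
# Back-of-the-envelope power budget for a solar quadcopter.
# All figures are illustrative assumptions, not measurements from the build.
HOVER_POWER_W = 180.0   # assumed power draw to hover
PANEL_POWER_W = 7.5     # assumed peak output per PV panel
PANELS = 4 * 7          # the build's final 4x7 array

def solar_margin(irradiance_fraction):
    """Net power (W) at some fraction of full sun; negative drains the battery."""
    return PANELS * PANEL_POWER_W * irradiance_fraction - HOVER_POWER_W

print(round(solar_margin(1.0), 1))   # full sun: small positive margin
print(round(solar_margin(0.8), 1))   # light cloud: battery must fill the gap
```

With these assumed figures even a 20% drop in irradiance flips the margin negative, matching the behavior seen during the test flight.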

The power circuit as implemented on the second prototype. (Credit: Luke Maximo Bell, YouTube)

During the subsequent five-hour test flight it was clear that the resulting PV-powered drone was at the limits of its performance, with even mild cloud cover forcing the battery to provide backup power.

For the test, a tree-sheltered site far away from windy Cape Town was selected to give the attempt the best possible shot, as holding position with this drone was very hard. With its low weight and the solar array’s big surface area catching every little bit of wind, GPS-based position holding was essential. Unfortunately, a few hours into the test this feature failed.

Manual position keeping is definitely possible, but [Luke] had to constantly counteract the drone wanting to drift off somewhere else. Ultimately the test flight ended when it was still very much a sunny South African summer’s day, due to the current provided by the array no longer keeping up with the power demands of the motors.


What this perhaps demonstrates best is that if you want to use PV solar power for your flying drone – especially with a significant payload – it’s probably best to use it for recharging while idle, or to extend the battery life by an appreciable amount. That said, props to [Luke] for persevering and making it work in the end.


Mac gaming is better than ever, and it still sucks


The state of Mac gaming does Apple’s incredible chips and stunning displays a disservice. As it has always been, there’s little to suggest Apple knows how, or even wants, to fix it.

The M5 Max MacBook Pro should be the ultimate gaming laptop, but it isn’t

The new M5 Max MacBook Pro might be Apple’s fastest-ever Mac. It’s still a terrible buy for anyone serious about gaming.
In truth, the 16-inch MacBook Pro should be a beast of a gaming laptop. It has a glorious, huge, bright, and colorful display and a massive battery.


What is DLSS 5? Nvidia’s controversial AI update explained


Nvidia had a busy few days during its global GTC event in San Jose.

Alongside the launch of NemoClaw, Nvidia unveiled DLSS 5, which the company has hailed as its “most significant breakthrough in computer graphics” since real-time ray tracing. Although it might sound promising, DLSS 5 has been met with significant backlash and criticism.

We explain everything you need to know about Nvidia DLSS 5, including what it really is and how it will affect video games. We’ll also explain why there’s been such backlash and why gamers and developers alike are not happy with DLSS 5.

What is Nvidia DLSS 5?

Before we dive into the specifics of DLSS 5, we’ll start by reminding you what DLSS actually is. Released back in 2018, DLSS (short for Deep Learning Super Sampling) is an AI-powered technology that upscales lower-resolution frames and generates new frames in video games to boost overall performance. According to Nvidia, it has been integrated into over 750 games and has become the “gold standard for the industry”.
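For contrast with what DLSS does, here is the crudest possible upscaler, written in Python. Nearest-neighbor sampling just copies pixels; DLSS replaces this fixed rule with a trained neural network that infers plausible detail, but the input/output shape of the problem is the same:

```python
def nearest_neighbor_upscale(frame, factor):
    """Upscale a 2D frame (a list of pixel rows) by an integer factor.

    Nearest-neighbor sampling just replicates each pixel; no new detail
    is created, which is the baseline DLSS's neural approach improves on.
    """
    out = []
    for row in frame:
        stretched = [px for px in row for _ in range(factor)]
        out.extend(list(stretched) for _ in range(factor))
    return out

low_res = [[0, 9],
           [9, 0]]
for row in nearest_neighbor_upscale(low_res, 2):
    print(row)
```

A 2x upscale of a 2×2 checkerboard simply yields a 4×4 checkerboard with blockier squares; the interesting part of DLSS is everything this function does not do.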


DLSS has seen various improvements and upgrades over the years, with DLSS 5 being the latest update. 


Nvidia explains that DLSS 5 uses an AI model that is trained end-to-end to understand aspects of complex scenes, including characters, hair and fabric alongside environmental lighting conditions, all by analysing a single frame. DLSS 5 will then generate visually precise images while retaining the structure and semantics of the original scene – all in real time at up to 4K resolution, so you shouldn’t notice any interruptions during gameplay.

But what does this mean for game developers? Nvidia says that DLSS 5 provides developers with “detailed controls” so they can determine where and how enhancements are applied to maintain each game’s aesthetic. At the time of writing, DLSS 5 will be supported by the likes of Bethesda, Ubisoft, Warner Bros Games and more. 

When is DLSS 5 coming out?

At the time of writing, Nvidia has only stated that DLSS 5 will arrive “this fall” and is yet to provide a concrete release date.


We do know a handful of titles that will support DLSS 5, including AION 2, Assassin’s Creed Shadows, Black State, CINDER CITY, Delta Force, Hogwarts Legacy, Justice, NARAKA: BLADEPOINT, NTE: Neverness to Everness, Phantom Blade Zero, Resident Evil Requiem, Sea of Remnants, Starfield, The Elder Scrolls IV: Oblivion Remastered, Where Winds Meet and more.


Nvidia also provided several examples of DLSS 5 in the likes of Resident Evil Requiem, EA SPORTS FC, Starfield and Hogwarts Legacy throughout the GTC.


What is the DLSS 5 controversy?

You’ll likely have seen some of the online backlash surrounding Nvidia’s DLSS 5 announcement, with disgruntled gamers taking to the likes of X and Reddit to dismiss the technology as nothing more than “AI slop”. However, there is more to their frustrations than that.

Game developers and artists have voiced concerns that DLSS 5 not only seems to make games look worse, but that they’ll lose artistic control over their games’ design. Gamers, for their part, have explained that they prefer deliberate, nuanced art direction over visuals that look smooth but feel less thought through.

Many also took to the comments section on Nvidia’s YouTube announcement to voice their unhappiness at DLSS 5. Nvidia has attempted to put minds at ease, which can be seen through the pinned comment on the video. In this comment, Nvidia explains that “game developers have full, detailed artistic control over DLSS 5’s effects to ensure they maintain their game’s unique aesthetic.”

The comment continues by explaining DLSS 5 is “not a filter” and instead “inputs the game’s color and motion vectors for each frame into the model, anchoring the output in the source 3D content.”


Not only that, but in a recent interview with Tom’s Hardware, Jensen Huang stated that those voicing anger are simply “completely wrong”. Huang concluded that DLSS 5 is “very different than generative AI; it’s content-control generative AI” and that developers will still retain control of the game and can “fine-tune the generative AI” to make it match the style of the game.

In fact, some video game powerhouses have voiced their praise for DLSS 5. For example, Todd Howard, Studio Head and Executive Producer at Bethesda Game Studios, stated in Nvidia’s press release that “with DLSS 5, the artistic style and detail shine through without being held back by the traditional limits of real-time rendering.”


In addition, Charlie Guillemot, co-CEO of Vantage Studios said the way DLSS 5 “renders lighting, materials and characters changes what we can promise to players. On Assassin’s Creed Shadows, it’s letting us build the kind of worlds we’ve always wanted to.”


‘There are many pathways to becoming an excellent engineer’


Kevin O’Riordan explains what potential applicants need to know in advance of a career at Acuity.

In September of last year, US industrial technology company Acuity announced plans to create 100 jobs at its Cork facility, its new Global Digital Centre of Excellence located in the city centre.

As March is when Silicon Republic often focuses on all things engineering, it seemed an ideal opportunity to check back in with Acuity and explore what prospective candidates should know if they plan to develop a career in this field.

Kevin O’Riordan, vice-president of technology and site leader at Acuity, explained that the organisation has already hired more than 30 engineers and leaders across cloud-native development, embedded systems, UI engineering and DevOps; however, there are still plenty of opportunities open to jobseekers looking for a new role in Cork.


“We are actively hiring for a wide range of roles, including software engineers (at all levels), software QA and automation, DevOps engineers, architects and technical leads, data and AI engineers and product managers,” he said. “New roles are being added regularly as we scale toward more than 100 software engineering R&D positions over the next three years.”

So how can a potential candidate stand out as they look towards a new professional opportunity?

Skills and thrills

O’Riordan said Acuity is looking for people who are passionate about technology, naturally curious and keen to solve real-world problems. Strong collaboration skills are critical, and candidates who can demonstrate initiative and a genuine interest in shaping the technology landscape stand out.

He added: “Because our product ecosystem spans multiple technology layers, we hire for a broad range of skills. We’re seeking engineers for a variety of projects with experience in any of embedded, cloud-native, or application and UI development. Desirable skills, depending on the role, include Python, AI, Azure, embedded C/C++, C#, modern JavaScript and React.


“For early career applicants, qualifications matter, but demonstrated skills, practical experience and personal projects carry significant weight. There are many pathways to becoming an excellent engineer, and we value all of them.”

The might of Munster

With many of Ireland’s key organisations and working hubs located in the capital, O’Riordan noted that the decision to establish Acuity in Cork was a “deliberate, strategic choice”, as Cork offers a “thriving tech hub with the talent, community and momentum to support a highly networked culture”. 

He said: “Our investment reinforces Cork’s growing reputation as a hub for innovation, talent and digital advancement. From day one, we’ve been intentional about integrating into the community, partnering with local organisations, supporting charity initiatives and sourcing materials and suppliers locally where possible.

“We are also contributing to the region’s long‑term talent pipeline through internships, graduate hiring and deep partnerships with local universities. By bringing advanced R&D work to Cork, we’re strengthening the broader tech ecosystem and supporting the city’s continued economic momentum.”

He also noted, perhaps with students attending university in the region in mind, that university partnerships are crucial to shaping the next generation of engineers in Ireland. “Cork’s institutions offer world‑class education, research capabilities and a diverse, motivated talent pipeline.”

For anyone interested in engaging with the organisation via their educational institution, O’Riordan said: “Our work with UCC, MTU and partners like Tyndall National Institute includes internships, course engagement, guest sessions that give students real-world insight, and early‑stage research exploration. These partnerships strengthen both Acuity and the broader ecosystem while ensuring students gain meaningful exposure to industry.”

Tech

IEEE and Academia Are Creating Microcredential Programs

The rapid ascent of artificial intelligence and semiconductor manufacturing has created a paradox: industries are booming, yet they face a critical shortage of skilled workers. Demand for data center technicians, fabrication facility workers, and similar positions is growing, but there aren’t enough candidates with the right skill sets to fill these in-demand jobs.

Although those technical roles are essential, they don’t always require a four-year degree—which has paved the way for skills-based microcredentials. By partnering with higher education institutions and training providers, industry leaders are helping to design targeted skills programs that quickly turn learners into job-ready technical professionals.

The new standard for skills validation

Because microcredentials are relatively new, consistency is key. Through its credentialing program, IEEE serves as a bridge between academia and industry. Developed and managed by IEEE Educational Activities, the program offers standardized credentials in collaboration with training organizations and universities seeking to provide skills-based qualifications outside formal degree programs. IEEE, as the world’s largest technical professional organization, has more than 30 years of experience offering industry-relevant credentials and expertise in global standardization.

IEEE is setting the benchmark for skills-based microcredentials by establishing a framework that includes assessment methods, qualifications for instructors and assessors, and criteria for skill levels.

A recent collaboration with the University of Southern California, in Los Angeles, for example, developed microcredentials for USC’s semiconductor cleanroom program. USC heads the CA Dreams microelectronics innovation hub.

IEEE worked with USC to create standardized skills assessments and associated microcredentials so that industry hiring managers can recognize the newly developed skills. The microcredentials help people with or without four-year degrees join the semiconductor industry as cleanroom technicians or as engineers with cleanroom experience.

IEEE also has partnered with the California NanoSystems Institute at the University of California, Los Angeles, to create skills-based microcredentials for its cleanroom protocol and safety program.

Best practices for designing microcredentials

Based on IEEE’s work designing microcredentials with USC, UCLA, and other leading academic institutions, three best practices have emerged.

1. Align with industry needs before design.

Collaborate with industry before starting the design process. There isn’t a one-size-fits-all approach: workforce needs vary by industry sector, company size, and geography. Higher education institutions and training providers should build relationships with companies and industry groups to create effective microcredential programs and methods of assessment.

2. Build for flexibility.

Traditional academic cycles can be slow, but technology moves fast. A flexible skills-based microcredentials framework allows programs to launch or pivot quickly as new breakthroughs occur.

“Setting up a credit-bearing course is not easy. And in a rapidly changing environment, you need to pivot quickly,” says Adam Stieg, research scientist and associate director at UCLA’s CNSI. “IEEE skills-based microcredentials are a flexible way to keep our curriculum aligned with an evolving technology landscape.”

Stieg’s team worked with IEEE to build a framework to create microcredentials for its cleanroom protocol and safety program, ensuring it kept pace with the industry’s evolution.

“The IEEE framework allows us to rapidly prototype training programs and adapt on the fly,” he says, “in a way that building new university courses—much less degree programs—won’t allow.”

3. Implement a continuous-feedback loop.

Many of the technical roles companies are looking to fill in emerging fields such as AI, cybersecurity, and semiconductors are still being developed or are quickly evolving. The rapidly changing landscape requires continual communications and feedback among higher education, training providers, and industry.

“We struggle to have feedback loops through the education system to the industry and back again,” says Matt Francis, president and CEO of Ozark Integrated Circuits, in Fayetteville, Ark. Francis, who has served as IEEE Region 5 director, is an IEEE volunteer who supports workforce development for the semiconductor industry.

Creating consistent feedback loops is critical for generating consensus on the skill sets needed for microcredential programs, experts say, and it allows providers to update assessments as new tools and safety protocols enter the workplace.

“If we start thinking about having training frameworks used within companies that are essentially on some sort of standard and align with a microcredential, we can start to build consensus,” Francis says.

Getting started

Through its credentialing program, IEEE is helping higher education and industry work together to bridge the technical workforce skills gap. Contact its team to learn how IEEE skills-based microcredentials can help you fill your workforce pipeline.

Tech

RAM crisis: Micron CEO forecasts spending increase to meet demands


  • Micron CEO says the company is unable to meet current demand
  • DRAM production is being prioritized for AI and datacenters
  • Consumers are pining for the cheap RAM of yesteryear

Micron Technology Inc. CEO Sanjay Mehrotra has said that the company is “only able to supply, for our key customers in the midterm, about 50% to two-thirds of their requirements.”

Mehrotra’s statement reflects the growing demand from datacenters for AI compute components, which is likely to further strain the memory supply.

Tech

Australia’s Teen Social Media Ban Is Just Training A Generation In The Art Of The Workaround

from the the-kids-are-alright dept

We’ve been covering Australia’s under-16 social media ban since before it went into effect, first noting the confusion and obvious implementation problems as pretty much everyone realized it was a total mess, and then documenting how the ban was actively harming kids with disabilities by cutting them off from critical support communities.

None of this was even remotely surprising. Critics around the world warned about all of it. The government went ahead anyway because doing something tends to poll better than doing something that actually works, especially when the thing that works is harder to explain. And government officials insisted (incorrectly) that the only ones who were complaining were the big tech companies or their proxies.

Now, three months in, the data is starting to arrive, and it confirms what should have been obvious from the start. New data from parental monitoring company Qustodio, provided to Crikey, shows that the ban has barely moved the needle:

While TikTok, YouTube and Snapchat all saw a decrease in use by Australians aged 10-15, the majority of teens who had been using the social media platforms pre-ban remained on the services afterwards.

That’s according to a new snapshot of data provided to Crikey by parental monitoring company Qustodio, adding to early evidence that there’s widespread circumvention of the government’s flagship tech policy.

The usage drop was only marginally larger than the normal seasonal dip that happens every year. In other words, the “world-first” ban achieved roughly the same effect as summer ending. There was definitely a drop, but it’s just not a particularly big one.

For what it’s worth, others are reporting the same thing. The Courier Mail found that the majority of teens who were using these apps before the ban were still using them afterwards.

Defenders of the ban will usually say something along the lines of: “We had to do something. Children were at risk. Even if it’s imperfect, at least we tried.” That argument might hold some water if the ban merely failed — if it just didn’t work and left things roughly where they were before. A swing and a miss. You dust yourself off and try something else.

But that’s not what happened. The ban didn’t leave things where they were. It made things actively worse, through a mechanism that was entirely predictable.

The ban is basically a test of technical sophistication, rather than a test of vulnerability. The kids who can’t figure out how to get around it — or who don’t have friends or older siblings to help them — are the kids who are already isolated or lack the technical skills to bypass a block. Those are the kids with disabilities who lost their support communities, the ones we wrote about last month. Those are the kids in rural areas or difficult home situations who relied on these platforms for connection. The ban selected for vulnerability and filtered against resourcefulness.

That’s a hell of a result for a child safety measure.

Meanwhile, the vast majority of kids — the ones the ban was supposedly protecting — just learned to route around it. Rather than learning responsible usage and digital literacy, they learned that age verification systems are obstacles to be defeated… which, congratulations, is probably the single least useful lesson you could teach a teenager about their relationship with technology.

Actually, it’s worse: Australian adults now have a false sense of security — the comfortable belief that they’ve magically protected kids from the evils of the internet.

When you pass a ban and declare the problem solved, you eliminate the political pressure to do the things that would actually help. Why fund digital literacy programs when kids aren’t supposed to be on social media at all? Why push platforms to develop better age-appropriate tools and experiences when under-16s are “banned”? Why have conversations with kids about healthy usage of something they’re not supposed to be using?

The ban creates a fiction — kids are off social media — that every politician and regulator has an incentive to maintain, even though the data says the fiction is exactly that. Kids are still using these platforms. They’re just doing it without guidance or access to real safety tools, and with the realization that the adults in charge don’t actually understand how any of this works.

So you end up with the worst possible outcome: nearly universal continued usage combined with policy complacency and zero institutional incentive to teach kids how to use these platforms safely. Kids using social media without supervision or education, while the government pats itself on the back for a ban that exists only on paper.

This was all foreseeable. It was all foreseen. Critics said so publicly, repeatedly, before the law passed. And the Australian government did it anyway, because “ban the thing” is a satisfying political narrative, even when — especially when — it doesn’t work.

And now that it’s failed, rather than admit that the plan was bad and dangerous… they’re doubling down by blaming the tech companies:

An eSafety spokesperson said that social media platforms need to take “continuous action” to find underage users on their platforms, including those who’ve created new accounts.

“eSafety is aware of reports some under-16s continue to access social media accounts and is actively engaging with platforms and their age assurance providers to probe weaknesses and encourage continuous improvement of implementation and settings while continuing to monitor for any systemic failures that may amount to a breach of the law,” they said.

The spokesperson foreshadowed further announcements in the coming weeks, adding: “We will provide further updates on age restricted platforms’ progress in meeting their obligations when it is appropriate to do so but we must be careful to not compromise the regulatory process currently underway or prejudice any enforcement action we may undertake in future.”

The blame will keep flowing toward the platforms. The kids will keep routing around the ban. And the adults will keep congratulating themselves for solving a problem they made worse.

Filed Under: australia, esafety, esafety commissioner, kids, safety, social media, social media ban, teens

Copyright © 2025