

Kalshi suspended three political candidates from its platform for insider trading


Prediction market Kalshi has taken action against three political candidates, alleging that each engaged in insider trading based on information about their own campaigns. The company implemented new rules last month aimed at preventing politicians and athletes from placing bets on events they can control, and it said those guardrails helped flag this trio of cases.

The three candidates are Mark Moran of Virginia, Matt Klein of Minnesota and Ezekiel Enriquez of Texas. Kalshi reached settlements with Klein and Enriquez, both of whom cooperated in the platform’s investigations. Each will face a fine of less than $1,000 and a suspension of up to five years. Moran’s case resulted in a disciplinary action, with a five-year suspension and a fine of more than $6,000. He posted on X about the situation and claimed the whole thing was essentially a stunt to see if he’d be caught and “to highlight how this company is destroying young men.”

Kalshi and other prediction markets have been the subject of several lawsuits by state attorneys general who are attempting to regulate the sector as gambling. Nevada, Arizona and New York have cases underway, but the state-level attempts are not looking promising. An appeals court ruled against New Jersey’s effort to govern this industry, and the US Commodity Futures Trading Commission has launched a lawsuit of its own in an effort to ensure it remains the only party to regulate prediction markets.


Why is Apple backing Android against the EU?


The European Union wants Google to allow any AI company to use its services, and the company hates the idea. Apple agrees with Google.

The European Union doesn’t seem to listen to Apple when the company complains about its own experiences trying to work within the Digital Markets Act (DMA). But since the EU has asked for responses to its proposals for Google to open up to rival AI firms, Apple has tried again.

“The DMs [draft measures] raise urgent and serious concerns,” said Apple in a submission to the EU, as seen by Reuters.

For instance, Apple is expressly concerned about the idea that any AI firm could in theory send emails or order food via Android, without Google’s or perhaps the user’s knowledge.


“If confirmed, they would create profound risks for user privacy, security, and safety as well as device integrity and performance,” continued Apple.

Apple doubtless has its own platforms in mind in objecting to rival firms having full access to Android. But it also makes the point that the EU has specified AI firms in its proposals, and it points out how error-strewn AI apps can be.

“These risks are especially acute in the context of rapidly evolving AI systems whose capabilities, behaviours, and threat vectors remain unpredictable,” said Apple, “as we are now seeing time and again.”

Anyone can submit their opinion to the EU when there is an open call like this, and everyone who does is really looking to protect their own interests. So Apple is clearly concerned that it, too, may be forced to allow the same rival access in iOS.


However, Apple also has the experience of what it has previously claimed to be “hundreds of thousands of engineering hours” spent complying with the DMA. And as part of its new submission, it questioned the EU’s technological expertise.

“The EC is redesigning an OS… it is substituting judgments made by Google’s engineers for its own judgment based on less than three months of work,” said Apple. “It is all the more dangerous given the only value that can be discerned from the [draft measures] guiding this work appears to be open and unfettered access.”

Separately, in May 2026, the EU concluded that its DMA has made a positive impact, thereby ignoring Apple’s lobbying for it to be revised.

What happens next

It’s not clear when Apple submitted its filing to the EU, but it was during the consultation period that ran from April 27, 2026, to May 13, 2026.


The European Commission states that it will “carefully assess” submissions from both Google and what it calls interested parties. It does say that there may be adjustments made to the proposed measures because of the submissions.

However, it also mandates that its final decision “must be adopted within six months” of the opening of the specification proceedings. In this case, that means July 27, 2026.


FreeCAD 1.1 Tutorial, For Beginners Who Like Clear Instructions


If you’ve been interested in FreeCAD but haven’t known where to start, here’s a wonderful video tutorial for FreeCAD 1.1 by [Deltahedra] aimed squarely at how to model a 3D part from scratch while also following best engineering practices for part design. It focuses on a concise and meaningful workflow that respects your time and doesn’t make assumptions about skill level. It even starts by taking a few moments to explain how to navigate the interface, a courtesy many will appreciate.

FreeCAD can do quite a lot, so a tutorial that focuses on a specific yet broadly-applicable task with a clear context is a great way to narrow the scope into something manageable, and be comprehensive without getting bogged down in minutiae. [Deltahedra] does this by exclusively using the part design workbench, demonstrating what to do to make a part step-by-step, and showing common mistakes that can happen and how to fix them if they occur. Beyond that, it’s left up to the curious hacker to delve for themselves into what else FreeCAD has to offer.

Since 1.1 is (at this writing) the latest stable release, one can also be confident that the tutorial will match the user interface and features on one’s own screen. After all, it can be frustrating to attempt to follow a tutorial only to find that things are a few versions behind and nothing is where one expects it to be.


Best practices aren’t just fussy rules about how to do things, and [Deltahedra] demonstrates this by showing how certain procedures just plain make more sense when designing shapes. Our own Arya Voronova has also shared best practices for FreeCAD, so check that out for some added perspective. You’ll be wielding FreeCAD in confidence and comfort in no time.

Thanks for the tip, [Vik Olliver]!


Testing Giant Fire Darts From The Mary Rose


Fire arrow versus the recreated fire dart. (Credit: Tod’s Workshop, YouTube)

The Mary Rose was a carrack in the English Tudor Navy of King Henry VIII that fought in multiple battles during the 16th century before it was sunk in 1545. After its wreck was located in 1971 and raised in 1982, the ship and all the items contained within the partially preserved hull became the focus of intense study. Among these items is the weaponry found on board, including the cannons, but also massive darts that seem to have been designed for an incendiary payload. Recently [Tod’s Workshop] collaborated with others to test these presumed incendiary darts.

Although fire arrows have been around for a while, what appear to be super-sized versions of these are somewhat unusual, but they could make sense for taking out enemy ships of the time. The main questions are how you would even fire them, and how effective they would be. Were the darts thrown by hand from, e.g., the crow’s nest, or fired from a cannon?

The reproduction darts are based on the recovered remnants of the originals, with an incendiary mixture inside a pitch-covered cloth covering. This mixture would be ignited by wooden fuses after a set amount of time, at which point the resulting fire would be basically impossible to put out. Obviously, this also means that if you were to throw one of these darts, it absolutely must not fall onto your own ship.

First tested was throwing the dart by hand, which seems like it would clear the ship. Of course, the three recovered darts were found near a rather special cannon that appeared to be both a miscast and angled upwards. Whether that cannon was used for launching apparently somewhat experimental darts is hard to say, but it can be tested. Sadly, lacking a full-sized black-powder cannon, the team fired a scale-model dart using compressed air.

From that scale test it’s clear that at full charge the dart would disintegrate due to the rapid acceleration, but a ‘soft’, or reduced, charge could work against nearby targets. Once the dart lodges itself into the enemy ship’s structure, it would definitely cause severe damage as further tests in the video demonstrate. Having a salvo of these fire darts fired at you from a nearby ship would definitely make for a pretty bad day.


LinkedIn becomes the latest name on a 100,000-job tech layoff list


Microsoft’s professional network becomes the latest name on a list that now includes Meta, Amazon, Oracle, and IBM, even as the same companies are guiding $725 billion of AI capital spending this year.


LinkedIn is cutting roughly 5% of its staff, the latest reduction at a Microsoft-owned business and the most recent entry in a year-long Big Tech contraction that has now displaced more than 100,000 workers across the sector.

Chief executive Dan Shapero, who took over from Ryan Roslansky in late April when Roslansky moved into a new AI role inside Microsoft, set out the cuts in a memo to employees, citing the need to operate “more profitably” and to reinvent how the company works with smaller, more agile teams. Bloomberg reported the memo on Wednesday.

LinkedIn employed roughly 17,500 staff at the start of 2026, implying a cut in the region of 875 roles.


The company has not confirmed an absolute number, but multiple outlets briefed by sources put the figure at about 875 jobs, with engineering, product, marketing, and the Global Business Organization carrying most of the impact.

The bigger number is the one that frames everything else. By 13 May, the global technology sector had announced more than 100,000 layoffs across some 250 separate events, an average of roughly 880 a day, according to industry trackers.


The TrueUp layoffs tracker had logged 286 events affecting 128,270 workers, the highest reading since the 2023 contraction.

The defining feature is the divergence between payroll and capital expenditure. Amazon, Microsoft, Alphabet, and Meta are collectively guiding to roughly $725 billion of capital spending in 2026, almost all of it directed at AI infrastructure, GPUs, and data centres.

That figure is up from $410 billion in 2025, and rising faster than at any point since the cloud build-out of the late 2010s. Headcount, meanwhile, is going the other direction at the same firms.

The biggest single tranche still ahead this week is Meta’s. The company will begin companywide layoffs on 20 May, cutting approximately 8,000 employees, or about 10% of its 78,865-person workforce, with further reductions planned for the second half of 2026.


Microsoft has taken a different shape. Rather than involuntary cuts, the company in April opened a voluntary-separation programme to around 8,750 US employees, roughly 7% of its domestic headcount, structured under a “Rule of 70” formula in which years of service plus age must total at least 70.
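
The Rule-of-70 arithmetic described above is simple enough to sketch in a few lines of Python. This is only an illustration of the formula as reported (age plus years of service totalling at least 70); the function name is ours, and it is not Microsoft's actual eligibility tooling:

```python
def rule_of_70_eligible(age: int, years_of_service: int) -> bool:
    """Return True when age plus years of service totals at least 70,
    the reported threshold for the voluntary-separation offer."""
    return age + years_of_service >= 70

# A 52-year-old with 18 years of service just qualifies (52 + 18 = 70),
# while a 45-year-old with 20 years falls short (45 + 20 = 65).
print(rule_of_70_eligible(52, 18))  # True
print(rule_of_70_eligible(45, 20))  # False
```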

It is the first such programme in the company’s 51-year history. Final notifications went out on 7 May, with a 30-day decision window. LinkedIn’s cuts now layer on top of those Microsoft moves.

Amazon has been quieter but is on a larger absolute trajectory. The company confirmed in January that it was cutting 16,000 corporate roles, bringing total reductions since October 2025 to roughly 30,000, the largest workforce contraction in its history.

Chief executive Andy Jassy framed the cuts as a flattening of layers built up during the 2020-2022 hyper-growth phase, not a direct AI substitution.


The smaller players are following the same pattern at a different scale. Oracle has cut roughly 30,000 positions, around 18% of its global workforce. IBM, Salesforce, Cisco, and SAP have all confirmed cuts over the year, and defence-adjacent contractors tied to federal technology procurement have shed several thousand roles since the start of the year.

For LinkedIn, the framing is narrower. Shapero’s memo pointed to slower revenue growth and an organisational flattening rather than an AI substitution, and the cuts are part of a wider Microsoft-group rebalance that began with the April Rule-of-70 programme.

LinkedIn’s revenue still grew 12% year on year in the most recent quarter, which makes the cut a profitability call, not a top-line one.

Whether the AI-substitution reading holds across the rest of the sector will probably be settled by the second-half 2026 round of disclosures, particularly Meta’s.


Until then, the running 2026 total is the only honest summary of the labour story: more than 100,000 jobs out, $725 billion of capex going in, and a widening gap between where the money sits and where the people do.


NAND contract prices surge over 600% since September 2025, DRAM up ~400%



You’ve no doubt heard the short version of this story time and again: AI startups are gobbling up all the memory that manufacturers can produce, leaving traditional electronics firms to fight over the remaining scraps. In every case, that means heavily inflated prices, and for some near the bottom…

How Did Apollo Separate? | Hackaday


If you’ve watched a Saturn V launch, you’ve probably seen how a large rocket will often jettison a stage on the way up. There are several reasons for this — there is no reason to haul an empty fuel container, for example. However, you can probably imagine how the separation works. You release something — probably explosive bolts — and gravity pulls the old stage away from you as you climb on the next stage’s engines. But what about on the way back? The command module drops the service module before reentry. [Apollo11Space] has a video explaining just how complicated that was to pull off. You can watch it below.

The main problem? The service module has almost everything you need: oxygen, a big engine, fuel, and electrical generation capability. If you’ve ever seen a real command module, you know how tiny they are. Somehow, you need to get the command module prepared to be on its own for the time it takes to land, and get the service module safely away.

In orbit, gravity isn’t a big help in pulling the two pieces apart. For that reason, the mission design called for a very specific orientation for the separation. There are a number of other details you might not have known about.


Landing Apollo 11 successfully depended on some spy tech. We imagine the separation of the LEM had some similar issues, although even the moon’s weak gravity would have helped.


What Will Be Running Inside the New Googlebook Laptops? What We Know So Far


Android and ChromeOS are merging into a single operating system that will debut in Google’s new laptop lineup, Googlebooks, announced during this week’s Android Show. With no official name yet, the merged operating system has been going by Aluminum OS, but that will likely change by the time it arrives on machines.

We’ve known for some time that Google’s mobile and cloud-based operating systems would be merging, but several questions still remain. Through a handful of leaks, we have a pretty good idea of what to expect. Here’s what we know.

What do we know about Aluminum OS?

Though it won’t be called Aluminum OS when it officially arrives, Google has remained tight-lipped about the name. And beyond what Google has shown us, we haven’t seen much of the operating system in action.


Previously, a now-private issue ticket gave us our first glimpse of the full Android desktop view. This short video shows two side-by-side windows replicating an issue. Hours before this week’s Android Show, the full setup experience of the OS was leaked in detail. 

The interface looks similar to Android’s existing desktop view, but the video also showed an extensions icon — something entirely new to the Android operating system outside of third-party web browsers.

We can also expect a lot from Aluminum OS in the way of artificial intelligence. Gemini is already at the heart of Google’s Pixel phones, and that’s exactly what we should see with its laptop lineup. 

How is this different from ChromeOS’s Android features?

Given that Chromebooks ship with the Google Play Store out of the box, you might wonder what the big deal is with Aluminum OS, which is fair. Unlike the Play Store on ChromeOS, the base layer of Aluminum is Android, offering native app support combined with a full desktop browsing experience from Chrome. 


In essence, Aluminum OS seems poised to be a more powerful and flexible version of Android. Given the billions of Android devices worldwide, the appeal of this new OS could be substantial. Having both your laptop and phone running the same operating system should create a far more integrated software experience across devices, with Gemini at the center.


Google’s AI-enabled mouse pointer understands ‘this’ and ‘that’


Right-clicking could go the way of the 3.5-inch floppy at the Chocolate Factory

Google doesn’t design mouse traps, so it’s trying to design a better mouse. 

Google DeepMind announced a research effort to transform the standard computer mouse cursor into a context-aware, AI-powered tool, marking what the company described as the first major rethinking of the cursor in more than 50 years.


The project by researchers Adrien Baranes and Rob Marchant integrated Google’s Gemini AI model with an experimental context-aware mouse pointer. In this way, the company said, the system can understand where a user clicks, what they are clicking on, and the likely intent behind the interaction.

Researchers said there is a persistent friction in how people currently interact with AI tools. Most AI assistants today live in a separate window, requiring users to copy, paste, or drag content into a chat interface before receiving help. The new approach aims to reverse that dynamic.

“We want the opposite: intuitive AI that meets users across all the tools they use, without interrupting their flow,” the researchers stated in the blog post.

The mouse pointer works alongside the computer’s microphone, allowing Gemini to listen as the user points. This lets users refer to features on the screen with object pronouns like “this” and “that.”


In a demonstration website, a user can hover a cursor over a crab and say “move this here,” and the system understands enough context to grab the crab and move it to where the cursor indicates. 

The first computer mouse, a one-button prototype with metal wheels for the x- and y-axes, was built out of wood in 1964 and patented in 1970 by its inventors, Doug Engelbart and Bill English, who worked at the Stanford Research Institute.

Engelbart foresaw a day when humans and computers would interact more easily and naturally, which he talked about during his 1997 acceptance speech for the Lemelson-MIT Prize.

“The computer technology, the digital capabilities, it’s affecting communications, displays, storage, computer processing. It’s affecting the way you can interface to things a lot more flexibly,” he said. “That’s going to be so pervasively high-impact in our society and our organizations that it’s more than anything we’ve had to cope with evolutionary wise.”


Maintain the flow

At Google, the team said it laid out four design principles guiding the project. The first, which the researchers called “Maintain the flow,” stated that AI capabilities should work across all applications rather than forcing users into separate AI-specific environments. Under this principle, a user could point at a PDF and request a summary, or hover over a statistics table and ask for a chart, all without leaving the current application.

The next, “Show and tell,” addressed the burden of prompt writing. The researchers stated that an AI-enabled pointer could capture visual and semantic context from the screen, reducing the need for users to write detailed text instructions to the model. 

They also developed the AI cursor based on how humans naturally communicate using short phrases and gestures like “this” and “that.” The researchers stated that the system would allow users to issue commands like “Fix this” or “Move that here” while the AI fills in the contextual gaps.

The fourth principle, “Turn pixels into actionable entities,” lets the pointer recognize structured objects within on-screen content. The researchers stated that this capability could turn a photo of a handwritten note into an interactive to-do list, or convert a paused video frame showing a restaurant into a booking link.


In the blog, the researchers said that Google DeepMind has already begun integrating the lessons learned into products. A feature called Magic Pointer will soon roll out on the forthcoming Googlebook laptop platform, which The Chocolate Factory introduced earlier this week. The company said the technology will also allow users of Gemini in Chrome to point at specific parts of a webpage and ask questions, rather than composing a full text prompt.

Experimental demos of the AI-enabled pointer are currently available through Google AI Studio, where users can test image-editing and map-based interactions using the point-and-speak approach.

The company said it plans to continue testing the concept across additional platforms, including Google Labs’ Disco.  ®


Amazon puts Alexa inside the search bar as agentic commerce heats up


The unified Alexa for Shopping assistant absorbs Rufus and arrives in the main search flow as Amazon sues to keep external AI agents like Perplexity’s Comet off its marketplace.


Amazon is moving its AI shopping assistant into the main search bar. Starting this week, US customers typing into the search field on Amazon.com or in the Amazon app will be routed through Alexa for Shopping, a unified version of the company’s Rufus chatbot and its Alexa+ assistant that returns conversational answers, product comparisons, up to a year of price history, and personalised shopping guides alongside the standard product listings.

The Rufus brand is being retired from the shopping interface. The chatbot, launched in 2024 and used by more than 300 million customers in 2025, is being folded into the Alexa for Shopping name across Amazon’s app, website, and Echo devices.

Amazon says the new assistant can also automate reordering of household staples, track prices, alert customers to new products in tracked categories, and build out shopping carts based on stated preferences.


It is available without a Prime membership, an Echo device, or the standalone Alexa app, and is free for any signed-in US account.


The structural change is that the AI now sits inside the default search flow rather than behind a separate icon. Rufus, in its original form, was accessible but optional.

Alexa for Shopping reframes the search box itself as a conversational interface, in the same way Google’s AI Overviews changed what happens after a query on Google.com.

Amazon’s own framing is that the move makes the assistant “agentic,” meaning it is able to complete multi-step tasks such as comparison, cart construction, and reordering on the customer’s behalf.

The competitive backdrop is what makes the placement significant. OpenAI launched Instant Checkout in September 2025 with Stripe and an open-source Agentic Commerce Protocol that lets ChatGPT complete purchases inside its own interface.


Google is building Buy for Me into Gemini and runs its A2A agent-to-agent protocol with 150-plus supporting organisations. Perplexity’s Comet browser has had a Buy with Pro feature since late 2024, with checkout via PayPal across 5,000-plus merchants.

In China, Alibaba integrated its Qwen AI directly into Taobao for end-to-end agentic shopping last quarter. Each of those routes the buy flow through someone other than Amazon.

The Perplexity case sharpens the picture. Amazon sued the AI search company in November, alleging its Comet shopping agent was accessing Amazon.com in violation of the site’s terms and creating problems for ad-impression measurement.

A federal judge granted Amazon a preliminary injunction in March; Perplexity took the case to the Ninth Circuit, which has temporarily paused parts of the order while the appeal is heard.


The legal argument is over agent access, but the commercial argument is over who captures the high-intent search query at the top of the funnel.

That is what Alexa for Shopping is designed to defend. Amazon’s $56 billion advertising business, all of it built around sponsored placements inside search and product pages, depends on Amazon being the first and last surface a buyer touches.

If a third-party AI agent does the comparison and the click on a customer’s behalf, the sponsored slot loses its target.

The internal answer is to make Amazon’s own AI assistant the most fluent shopper on Amazon.com, with access to the price history, recommendation graph, and account-level purchase data that an external agent does not have.


Whether it works as a product is a separate question. Amazon has tried to make Alexa the front door to its shopping business for the better part of a decade, with mixed results.

Voice shopping never reached the share the company once projected, and the original Rufus chatbot, while widely used, has been described in trade reporting as more useful for product research than for closing transactions.

The unification with Alexa+ is also a tacit acknowledgement that running two AI assistants, one for the home and one for the cart, was confusing to customers and expensive to maintain.

The rollout this week is US-only, with international expansion timed to Alexa+’s broader availability, which Amazon has been pushing through 2026. 


All the Android phones getting the new Gemini Intelligence


Google has announced plenty of new features and upcoming products, but one of the most intriguing is undoubtedly Gemini Intelligence.

Gemini Intelligence promises to bring the “best of Gemini” to compatible devices by integrating premium hardware and software to help users in everyday life. For an overview of what the new system specifically includes, visit our Gemini Intelligence explainer.

Google has revealed that Gemini Intelligence features will roll out in waves from the summer, but which phones are expected to see the upgrade?

We’ve rounded up the Android phones that should see Gemini Intelligence and, where possible, we detail when handsets are likely to receive the upgrade.


For more on Google’s recent Android 17 announcements, make sure you visit our guides on the new Pause Point feature and the Emoji revamp. Finally, the best Android phones list reveals our current favourite handsets on the market.


Which Android phones are expected to see Gemini Intelligence?

At the time of writing, Google hasn’t revealed exact dates for when we can expect the Android 17 update to launch. Instead, the company has just stated it will begin the rollout this summer.

Gemini Intelligence. Image Credit (Google)

We should also note that, at the time of writing, Google hasn’t officially announced the specific phones that will support Gemini Intelligence. Instead, Google states that the first Android devices to see Gemini Intelligence will be the latest Samsung Galaxy and Google Pixel phones. With this in mind, we can assume the entire Pixel 10 series, including the Pixel 10 Pro Fold and potentially even the affordable Pixel 10a, will see the feature.

Pixel 10a in hand. Image Credit (Trusted Reviews)

We don’t currently know if the Pixel 9 series will benefit from Gemini Intelligence, so we’ll have to wait and see. 

Similarly, we can reasonably expect that the Galaxy S26 series, including the Galaxy S26 Ultra, will sport Gemini Intelligence. Plus, considering the upcoming Z Fold 8 and Z Flip 8 are rumoured to launch sometime in the summer, it’s likely that the foldables will also use Gemini Intelligence – though that’s speculation on our part.


Which other Android devices will see Gemini Intelligence?

Google has teased that other Android devices will include Gemini Intelligence features. Such devices will include “your watch, car, glasses and laptops.” WearOS will see features like Create my Widget, while Android Auto will soon be able to pair and integrate with Gemini Intelligence-compatible Androids. 

Finally, we also know that Google’s new Googlebook line-up will benefit from Gemini Intelligence features, including the Magic Pointer and Create my Widgets.

