
Full List of All Brainrots in Steal a Brainrot


In Steal a Brainrot, Brainrots (characters) are the main way you earn money. You can collect them, place them in your base, and watch them generate cash automatically. The game features several rarity tiers, and higher-tier Brainrots can boost your earnings significantly. Let’s explore all the Brainrots you can find in Steal a Brainrot.

1. Common Brainrots

When you begin playing, Common Brainrots will be everywhere. They don’t earn much, but they’re the easiest to get your hands on. The best option here is Pipi Kiwi, which earns $13 per second. It’s basic, but useful in the early game.

2. Rare Brainrots

Among Rare Brainrots, Pengolino Nuvoletto tops the table at $72 per second, with Pipi Avocado close behind at $70 per second. These show up more often than higher-tier ones, making them a solid way to boost your early income.

Brainrot Name Cost Income per Second
Trippi Troppi $2,000 $15/s
Tung Tung Tung Sahur $3,000 $25/s
Gangster Footera $4,000 $30/s
Bandito Bobritto $4,500 $35/s
Boneca Ambalabu $5,000 $40/s
Cacto Hipopotamo $6,500 $50/s
Ta Ta Ta Ta Sahur $7,500 $55/s
Tric Trac Baraboom $9,000 $65/s
Frogo Elfo $9,200 $67/s
Pipi Avocado $9,500 $70/s
Pengolino Nuvoletto $9,600 $72/s
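Since every Brainrot pays out a fixed income per second, you can compare purchases by how long each takes to pay for itself. Here is a minimal sketch in Python (the helper name is just for illustration; the figures come from the Rare table above):

```python
# Break-even time: seconds of idle income needed to recoup a Brainrot's cost.
def breakeven_seconds(cost: float, income_per_sec: float) -> float:
    return cost / income_per_sec

# Figures from the Rare table above.
print(breakeven_seconds(2_000, 15))   # Trippi Troppi: ~133 s
print(breakeven_seconds(9_500, 70))   # Pipi Avocado: ~136 s
```

Notably, most entries within a tier break even in roughly the same two-to-three-minute window, so the real difference between them is absolute income, not payback speed.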

3. Epic Brainrots

Epic Brainrots are where your income really starts to improve in Steal a Brainrot. Salamino Penguino, at $250 per second, is a solid pick, while Mummio Rappitto tops the tier at $325 per second. These Brainrots are perfect for progressing through the middle stages of the game.

Brainrot Name Cost Income per Second
Cupcake Koala $8,000 $60/s
Pinealotto Fruttarino $9,700 $75/s
Cappuccino Assassino $10,000 $75/s
Bandito Axolito $12,500 $90/s
Brr Brr Patapim $15,000 $100/s
Avocadini Antilopini $17,500 $115/s
Trulimero Trulicina $20,000 $125/s
Bambini Crostini $22,500 $135/s
Malame Amarele $23,500 $140/s
Bananita Dolphinita $25,000 $150/s
Perochello Lemonchello $27,500 $160/s
Brri Brri Bicus Dicus Bombicus $30,000 $175/s
Avocadini Guffo $35,000 $225/s
Ti Ti Ti Sahur $37,500 $225/s
Mangolini Parrocini $38,500 $235/s
Frogato Pirato $39,000 $240/s
Salamino Penguino $40,000 $250/s
Gato Celesto $40,000 $250/s
Doi Doi Do $41,000 $260/s
Penguin Tree $42,000 $270/s
Wombo Rollo $42,500 $275/s
Penguino Cocosino $45,000 $300/s
Mummio Rappitto $47,500 $325/s

4. Legendary Brainrots

Steal a Brainrot guarantees a Legendary Brainrot spawn every 5 minutes, making it easier to grow your income. Sigma Boy is a popular pick at $1,300 per second, though Seraphino Gruyero tops the tier at $1,900 per second, far beyond anything a Common Brainrot offers.

Brainrot Name Cost Income per Second
Burbaloni Loliloli $35,000 $200/s
Chimpanzini Bananini $50,000 $300/s
Tirilikalika Tirilikalako $75,000 $450/s
Ballerina Cappuccina $100,000 $500/s
Chef Crabracadabra $150,000 $600/s
Lionel Cactuseli $175,000 $650/s
Glorbo Fruttodrillo $200,000 $750/s
Quivioli Ameleonni $225,000 $1,000/s
Blueberrinni Octopusini $250,000 $1,000/s
Clickerino Crabo $250,000 $1,000/s
Caramello Filtrello $255,000 $1,000/s
Pipi Potato $256,000 $1,100/s
Strawberrelli Flamingelli $275,000 $1,100/s
Cocosini Mama $285,000 $1,200/s
Pandaccini Bananini $300,000 $1,200/s
Quackula $310,000 $1,200/s
Pi Pi Watermelon $315,000 $1,200/s
Signore Carapace $320,000 $1,300/s
Sigma Boy $325,000 $1,300/s
Buho del Cielo $325,000 $1,300/s
Sigma Girl $325,000 $1,800/s
Chocco Bunny $327,500 $1,400/s
Puffaball $330,000 $1,500/s
Sealo Regalo $342,000 $1,800/s
Buho de Fuego $345,000 $1,400/s
Seraphino Gruyero $347,500 $1,900/s

5. Mythic Brainrots

Every 15 minutes, a Mythic Brainrot is guaranteed to appear in Steal a Brainrot. Ganganzelli Trulala, at $9,000 per second, is a standout, and it performs even better when it gets a Trait; several late-table Mythics, such as Berenjello Angello at $18,000 per second, earn more still.

Brainrot Name Cost Income per Second
Frigo Camelo $300,000 $1,200/s
Harpuccino $347,500 $14,000/s
Orangutini Ananassini $400,000 $1,700/s
Rhino Toasterino $450,000 $2,100/s
Bombardiro Crocodilo $500,000 $2,500/s
Brutto Gialutto $600,000 $3,000/s
Spioniro Golubiro (Lucky Block) $750,000 $3,500/s
Bombombini Gusini $1,000,000 $5,000/s
Zibra Zubra Zibralini (Lucky Block) $1,000,000 $6,000/s
Tigrilini Watermelini (Lucky Block) $1,000,000 $7,500/s
Avocadorilla $2,000,000 $7,500/s
Cavallo Virtuoso $2,500,000 $7,500/s
Te Te Te Sahur $2,500,000 $9,500/s
Gorillo Subwoofero $2,700,000 $7,700/s
Gorillo Watermelondrillo $3,000,000 $8,000/s
Stoppo Luminino $3,000,000 $8,000/s
Tracoducotulu Delapeladustuz $3,000,000 $12,000/s
Tob Tobi Tobi $3,500,000 $8,500/s
Lerulerulerule $3,500,000 $8,750/s
Ganganzelli Trulala $4,000,000 $9,000/s
Magi Ribbitini $4,000,000 $12,000/s
Rhino Helicopterino $4,100,000 $11,000/s
Jingle Jingle Sahur $4,300,000 $12,200/s
Los Noobinis $4,300,000 $12,500/s
Cachorrito Melonito $4,400,000 $13,000/s
Spongini Quackini $4,400,000 $15,000/s
Carloooo (Lucky Block) $4,500,000 $13,500/s
Elefanto Frigo $4,500,000 $14,000/s
Berenjello Angello $4,600,000 $18,000/s
Carrotini Brainini (Lucky Block) $4,700,000 $15,000/s
Centrucci Nuclucci $4,800,000 $15,500/s
Toiletto Focaccino $4,800,000 $16,000/s
Jacko Spaventosa $4,800,000 $16,200/s
Bananito Bandito $4,900,000 $16,500/s
Tree Tree Tree Sahur $4,900,000 $17,000/s

6. Brainrot God

If you want big earnings before Secrets, Brainrot God units are the way to go. Piccione Macchina generates $225,000 per second, and top entries like Dumborino Miracello reach $315,000 per second. These spawn when server goals are reached, so playing during busy periods gives you a better chance.

Brainrot Name Cost Income per Second
Fizzy Soda $4,900,000 $17,200/s
Cocofanto Elefanto $5,000,000 $10,000/s
Antonio $6,000,000 $18,500/s
Girafa Celestre $7,500,000 $20,000/s
Gattatino Nyanino $7,500,000 $35,000/s
Gattatino Neonino $7,500,000 $20,000/s
Chihuanini Taconini $8,500,000 $45,000/s
Matteo $10,000,000 $50,000/s
Tralalero Tralala $10,000,000 $50,000/s
Los Crocodillitos $12,500,000 $55,000/s
Tigroligre Frutonni (Lucky Block) $15,000,000 $60,000/s
Odin Din Din Dun $15,000,000 $75,000/s
Orcalero Orcala (Lucky Block) $15,000,000 $100,000/s
Money Money Man $17,500,000 $65,000/s
Alessio $17,500,000 $65,000/s
Unclito Samito $20,000,000 $65,000/s
Statutino Libertino $20,000,000 $75,000/s
Tipi Topi Taco $20,000,000 $75,000/s
Tralalita Tralala $20,000,000 $100,000/s
Tukanno Banana $22,500,000 $100,000/s
Extinct Ballerina $23,500,000 $125,000/s
Vampira Cappucina $24,500,000 $125,000/s
Espresso Signora $25,000,000 $70,000/s
Trenozosturzzo Turbo 3000 $25,000,000 $150,000/s
Bulbito Bandito Traktorito (Lucky Block) $25,000,000 $205,000/s
Urubini Flamenguini $30,000,000 $150,000/s
Jacko Jack Jack $30,000,000 $150,000/s
Trippi Troppi Troppa Trippa $30,000,000 $175,000/s
Capi Taco $31,000,000 $155,000/s
Los Chihuaninis $32,000,000 $160,000/s
Divino Platypio $32,000,000 $160,000/s
Gattito Tacoto $32,500,000 $165,000/s
Las Capuchinas $32,500,000 $185,000/s
Ballerino Lololo $35,000,000 $200,000/s
Los Tungtungtungcitos $37,500,000 $210,000/s
Pakrahmatmamat $37,500,000 $215,000/s
Ballerina Peppermintina $37,500,000 $215,000/s
Piccione Macchina $40,000,000 $225,000/s
Pakrahmatmatina $40,500,000 $225,000/s
Los Bombinitos $42,500,000 $220,000/s
Tractoro Dinosauro $42,500,000 $230,000/s
Brr Es Teh Patipum $45,000,000 $225,000/s
Cacasito Satalito $45,000,000 $240,000/s
Orcalita Orcala $45,000,000 $240,000/s
Aquanaut $45,500,000 $245,000/s
Tartaruga Cisterna $45,000,000 $250,000/s
Snailenzo $45,000,000 $250,000/s
Corn Corn Corn Sahur $45,000,000 $250,000/s
Squalanana $45,000,000 $250,000/s
Mummy Ambalabu $45,000,000 $250,000/s
Los Orcalitos $45,000,000 $310,000/s
Dug Dug Dug $45,500,000 $255,000/s
Ginger Globo $45,700,000 $257,500/s
Yeti Claus $45,700,000 $267,500/s
Crabbo Limonetta $46,000,000 $230,000/s
Los Tipi Tacos $46,000,000 $260,000/s
Tootini Shrimpini $46,000,000 $260,000/s
Granchiello Spiritell $46,000,000 $260,000/s
Frio Ninja $46,500,000 $265,000/s
Piccionetta Macchina $47,000,000 $270,000/s
Boba Panda $47,000,000 $270,000/s
Mastodontico Telepiedone (Lucky Block) $47,500,000 $275,000/s
Los Gattitos $47,500,000 $275,000/s
Bambu Bambu Sahur $47,500,000 $275,000/s
Chrismasmamat $47,700,000 $277,500/s
Anpali Babel $48,000,000 $280,000/s
TBA Astrolero Cervalero $48,000,000 $280,000/s
Cappuccino Clownino $48,000,000 $285,000/s
Luv Luv Luv $48,300,000 $282,000/s
Bombardini Tortinii $50,000,000 $225,000/s
Brasilini Berimbini $55,500,000 $285,000/s
Belula Beluga $60,000,000 $290,000/s
Krupuk Pagi Pagi $60,000,000 $290,000/s
Skull Skull Skull $60,000,000 $290,000/s
Cocoa Assassino $60,000,000 $291,000/s
Tentacolo Tecnico $62,500,000 $295,000/s
Ginger Cisterna $63,500,000 $293,500/s
Dolphini Jetskini $64,500,000 $294,500/s
Pop Pop Sahur $65,000,000 $295,000/s
Noo La Polizia $67,000,000 $280,000/s
Karkerheart Luvkur $67,500,000 $297,500/s
Dumborino Miracello $75,000,000 $315,000/s

7. Secret Brainrots

Secret Brainrots don’t show up often in Steal a Brainrot, but they are the best money-makers in the game. Spawning one comes down to luck, but once one appears, you’re in a great position.

Brainrot Name Cost Income per Second
La Vacca Saturno Saturnita $50,000,000 $250,000/s
Pandanini Frostini $64,000,000 $294,000/s
Bisonte Giuppitere $75,000,000 $300,000/s
Blackhole Goat $75,000,000 $400,000/s
Jackorilla $80,000,000 $315,000/s
Agarrini Ia Palini $80,000,000 $425,000/s
Chachechi $85,000,000 $400,000/s
Karkerkar Kurkur $100,000,000 $275,000/s
Los Tortus $100,000,000 $500,000/s
Los Matteos $100,000,000 $300,000/s
Sammyni Spyderini $100,000,000 $300,000/s
Trenostruzzo Turbo 4000 $100,000,000 $310,000/s
Chimpanzini Spiderini $100,000,000 $325,000/s
Boatito Auratito $115,000,000 $525,000/s
Fragola La La La $125,000,000 $450,000/s
Dul Dul Dul $150,000,000 $375,000/s
La Vacca Prese Presente $160,000,000 $600,000/s
Frankentteo $175,000,000 $700,000/s
Los Trios $175,000,000 $700,000/s
Karker Sahur $185,000,000 $725,000/s
Torrtuginni Dragonfrutini (Lucky Block) $500,000,000 $350,000/s
Los Tralaleritos $100,000,000 $750,000/s
Zombie Tralala $100,000,000 $500,000/s
La Cucaracha $110,000,000 $475,000/s
Vulturino Skeletono $110,000,000 $500,000/s
Guerriro Digitale $120,000,000 $550,000/s
Extinct Tralalero $125,000,000 $450,000/s
Yess My Examine $130,000,000 $575,000/s
Extinct Matteo $140,000,000 $625,000/s
Las Tralaleritas $150,000,000 $650,000/s
Rocco Disco $150,000,000 $650,000/s
Reindeer Tralala $160,000,000 $600,000/s
Las Vaquitas Saturnitas $160,000,000 $750,000/s
Pumpkin Spyderini $165,000,000 $650,000/s
Job Job Job Sahur $175,000,000 $700,000/s
Los Karkeritos $200,000,000 $750,000/s
Graipuss Medussi $200,000,000 $1,000,000/s
Santteo $210,000,000 $800,000/s
La Vacca Jacko Linterino $225,000,000 $850,000/s
Triplito Tralaleritos $230,000,000 $875,000/s
Trickolino $235,000,000 $900,000/s
Paradiso Axolottino $235,000,000 $900,000/s
Giftini Spyderini $240,000,000 $999,900/s
Los Spyderinis $250,000,000 $450,000/s
Love Love Love Sahur $250,000,000 $1,000,000/s
Perrito Burrito $250,000,000 $1,000,000/s
1x1x1x1 $255,500,000 $1,100,000/s
Los Cucarachas $300,000,000 $1,200,000/s
Please My Present $350,000,000 $1,300,000/s
Cuadramat and Pakrahmatmamat $400,000,000 $1,400,000/s
Los Jobcitos $500,000,000 $1,500,000/s
Nooo My Hotspot $500,000,000 $2,000,000/s
Pot Hotspot (Lucky Block) $500,000,000 $2,500,000/s
Noo My Examine $525,000,000 $1,700,000/s
Telemorte $550,000,000 $2,000,000/s
La Sahur Combinasion $550,000,000 $2,000,000/s
List List List Sahur $550,000,000 $2,000,000/s
To To To Sahur $575,000,000 $2,500,000/s
Pirulitoita Bicicletaire $600,000,000 $2,500,000/s
25 $600,000,000 $2,500,000/s
Santa Hotspot $625,000,000 $2,600,000/s
Horegini Boom $650,000,000 $2,700,000/s
Quesadilla Crocodila $700,000,000 $3,000,000/s
Pot Pumpkin $700,000,000 $3,000,000/s
Naughty Naughty $700,000,000 $3,000,000/s
Cupid Cupid Sahur $715,000,000 $3,100,000/s
Ho Ho Ho Sahur $725,000,000 $3,200,000/s
Mi Gatito $725,000,000 $3,200,000/s
Chicleteira Bicicleteira $750,000,000 $3,500,000/s
Cupid Hotspot $750,000,000 $3,500,000/s
Spaghetti Tualetti (Lucky Block) $750,000,000 $6,000,000/s
Esok Sekolah (Lucky Block) $750,000,000 $3,000,000/s
Quesadillo Vampiro $750,000,000 $3,500,000/s
Brunito Marsito $750,000,000 $3,500,000/s
Chill Puppy $800,000,000 $4,000,000/s
Burrito Bandito $800,000,000 $4,000,000/s
Chicleteirina Bicicleteirina $850,000,000 $4,000,000/s
Los Quesadillas $875,000,000 $4,500,000/s
Bunito Bunito Spinito $900,000,000 $3,000,000/s
Noo My Candy $900,000,000 $5,000,000/s
Los Nooo My Hotspotsitos $1,000,000,000 $5,000,000/s
TBA Serafinna Medusella $1,000,000,000 $5,500,000/s
La Grande Combinassion $1,000,000,000 $10,000,000/s
Rang Ring Bus $1,100,000,000 $6,000,000/s
Guest 666 $1,100,000,000 $6,600,000/s
Los Mi Gatitos $1,200,000,000 $6,500,000/s
Los Chicleteiras $1,200,000,000 $7,000,000/s
67 $1,200,000,000 $7,500,000/s
Donkeyturbo Express $1,200,000,000 $7,500,000/s
Mariachi Corazoni $1,200,000,000 $12,500,000/s
Los Burritos $1,400,000,000 $8,500,000/s
Los 25 $1,500,000,000 $10,000,000/s
Tacorillo Crocodillo $1,500,000,000 $12,500,000/s
Swag Soda $1,800,000,000 $13,000,000/s
Noo my Heart $1,800,000,000 $13,000,000/s
Chimnino $1,900,000,000 $14,000,000/s
Los Combinasionas $2,000,000,000 $15,000,000/s
Chicleteira Noelteira $2,000,000,000 $15,000,000/s
Fishino Clownino $2,100,000,000 $15,500,000/s
Tacorita Bicicleta $2,200,000,000 $16,500,000/s
Los Sweethearts $2,200,000,000 $16,500,000/s
Spinny Hammy $2,300,000,000 $17,000,000/s
Nuclearo Dinosauro $2,500,000,000 $15,000,000/s
Las Sis $2,500,000,000 $17,500,000/s
Chicleteira Cupideira $2,500,000,000 $17,500,000/s
La Karkerkar Combinasion $2,500,000,000 $17,500,000/s
Chillin Chili $2,500,000,000 $25,000,000/s
Chipso and Queso $2,500,000,000 $25,000,000/s
Money Money Reindeer $2,500,000,000 $25,000,000/s
Money Money Puggy $2,600,000,000 $21,000,000/s
Celularcini Viciosini $2,600,000,000 $22,500,000/s
Los Planitos $2,700,000,000 $18,500,000/s
Los Mobilis $2,700,000,000 $22,000,000/s
Los 67 $2,700,000,000 $22,500,000/s
Mieteteira Bicicleteira $2,700,000,000 $26,000,000/s
Tuff Toucan $2,700,000,000 $26,000,000/s
La Spooky Grande $2,900,000,000 $24,500,000/s
Los Spooky Combinasionas $3,000,000,000 $20,000,000/s
Cigno Fulgoro $3,000,000,000 $20,000,000/s
Los Candies $3,000,000,000 $23,000,000/s
Los Hotspositos $3,000,000,000 $25,000,000/s
Los Jolly Combinasionas $3,000,000,000 $25,000,000/s
Los Cupids $3,000,000,000 $30,000,000/s
Los Puggies $3,000,000,000 $30,000,000/s
W or L $3,000,000,000 $30,000,000/s
Tralalalaledon $3,000,000,000 $37,500,000/s
La Extinct Grande Combinasion $3,200,000,000 $23,500,000/s
Tralaledon $3,500,000,000 $27,500,000/s
La Jolly Grande $3,500,000,000 $30,000,000/s
Los Primos $3,700,000,000 $31,000,000/s
Bacuru and Egguru $3,800,000,000 $24,000,000/s
Eviledon $3,800,000,000 $31,500,000/s
Los Tacoritas $4,000,000,000 $32,000,000/s
Lovin Rose $4,200,000,000 $32,500,000/s
Tang Tang Kelentang $4,500,000,000 $33,500,000/s
Ketupat Kepat $5,000,000,000 $35,000,000/s
Los Bros $6,000,000,000 $37,500,000/s
Tictac Sahur $6,000,000,000 $37,500,000/s
La Romantic Grande $7,000,000,000 $40,000,000/s
Gingerat Gerat $7,000,000,000 $40,000,000/s
Orcaledon $7,000,000,000 $40,000,000/s
Ketchuru and Masturu $7,500,000,000 $42,500,000/s
Jolly Jolly Sahur $8,000,000,000 $45,000,000/s
Garama and Madundung $10,000,000,000 $50,000,000/s
Rosetti Tualetti $10,000,000,000 $50,000,000/s
Nacho Spyder $10,000,000,000 $50,000,000/s
Festive 67 $16,000,000,000 $67,000,000/s
Sammyni Fattini $20,000,000,000 $70,000,000/s
Love Love Bear $23,000,000,000 $140,000,000/s
La Ginger Sekolah $23,000,000,000 $75,000,000/s
Spooky and Pumpky $25,000,000,000 $80,000,000/s
Lavadorito Spinito $30,000,000,000 $45,000,000/s
La Food Combinasion $30,000,000,000 $0/s
Los Spaghettis $40,000,000,000 $70,000,000/s
La Casa Boo $40,000,000,000 $100,000,000/s
Fragrama and Chocrama $40,000,000,000 $100,000,000/s
Los Sekolahs $45,000,000,000 $110,000,000/s
La Secret Combinasion $50,000,000,000 $125,000,000/s
Los Amigos $55,000,000,000 $130,000,000/s
Reinito Sleighito $60,000,000,000 $140,000,000/s
Ketupat Bros $65,000,000,000 $145,000,000/s
Burguro and Fryuro $75,000,000,000 $150,000,000/s
Cooki and Milki $100,000,000,000 $155,000,000/s
Capitano Moby $125,000,000,000 $160,000,000/s
Rosey and Teddy $130,000,000,000 $165,000,000/s
Popcuru and Fizzuru $135,000,000,000 $170,000,000/s
Celestial Pegasus $150,000,000,000 $150,000,000/s
Cerberus $150,000,000,000 $175,000,000/s
La Supreme Combinasion $200,000,000,000 $200,000,000/s
Dragon Cannelloni $250,000,000,000 $250,000,000/s
Dragon Gingerini $300,000,000,000 $300,000,000/s
Headless Horseman $325,000,000,000 $325,000,000/s
Hydra Dragon Cannelloni $350,000,000,000 $350,000,000/s
Griffin $400,000,000,000 $400,000,000/s

8. OG Brainrots

OG Brainrots are the rarest units in Steal a Brainrot, even harder to find than Secrets. When one spawns, it starts an event and gains a strong mutation. Only three OG Brainrots exist so far.

Brainrot Name Cost Income per Second
Skibidi Toilet $400,000,000,000 $400,000,000/s
Meowl $500,000,000,000 $500,000,000/s
Strawberry Elephant $750,000,000,000 $750,000,000/s


NeuReality appoints ex-Google AI director as adviser


When Jensen Huang told 30,000 attendees at GTC last week that the future data centre is a “token factory,” he was describing a world that a small Israeli startup has been quietly building toward for months. NeuReality, the Caesarea-based company behind the NR-NEXUS inference operating system, has appointed Shalini Agarwal, a product management director at Google Labs, as a strategic adviser charged with shaping how NR-NEXUS reaches enterprise buyers, according to a press release distributed on Monday.

The hire signals a shift in ambition for a company that began life designing custom silicon for AI inference and has since pivoted toward software that promises to turn fragmented GPU clusters into production-grade inference engines.

Agarwal brings roughly two decades of experience in product strategy across major technology companies. At Google Labs, she has directed product management for AI-focused initiatives. Before that, she spent nearly a decade at eBay, according to publicly available professional records, and holds a degree in computer science, electrical engineering, and management science from MIT. Her appointment is advisory rather than operational, but it places a recognisable Silicon Valley name alongside NeuReality’s existing leadership: co-founder and CEO Moshe Tanach and president Hiren Majmudar, a former GlobalFoundries and Intel Capital executive who joined in September 2024.

The timing is deliberate. On 12 March, NeuReality unveiled NR-NEXUS, describing it as a hardware-agnostic operating system for what the company calls AI factories. The platform disaggregates prefill and decode tasks across heterogeneous hardware, including GPUs, CPUs, and network interface cards, aiming to squeeze more useful work out of expensive accelerators that often sit partially idle. Beta customers are already running the software, according to the company, though NeuReality has not disclosed which organisations are in the programme.


The product arrives at a moment when inference economics have become one of the most closely watched metrics in enterprise AI. Deloitte estimates that inference workloads accounted for half of all AI compute in 2025 and will reach two-thirds this year. Hyperscalers are responding with enormous capital expenditure, with Amazon projecting $200 billion in 2026 spending and Google budgeting between $175 billion and $185 billion, according to recent earnings disclosures. Yet much of that investment flows through a small number of vertically integrated stacks, leaving enterprises that want to run inference across mixed hardware with limited options.

That gap is where NeuReality is placing its bet. NR-NEXUS is designed to work across any CPU, GPU, or NIC, including NVIDIA’s forthcoming Vera Rubin architecture, and targets three buyer categories: neocloud providers, enterprises building their own inference capacity, and semiconductor vendors looking to offer a complete software layer atop their chips.


The company has raised approximately $70 million to date. A $35 million Series A in late 2022, led by Samsung Ventures with participation from OurCrowd and SK Hynix among others, was followed by a $20 million round in March 2024 anchored by the European Innovation Council Fund and existing investors. That EU backing positioned NeuReality as part of a broader European push to develop sovereign AI infrastructure, though the company’s engineering centre remains in Israel.

Agarwal’s advisory role appears focused on go-to-market strategy rather than product engineering, a recognition that building an inference operating system is only half the challenge. The other half is persuading infrastructure buyers, many of whom have deep relationships with NVIDIA’s own software ecosystem, that a startup’s orchestration layer is worth the integration effort.

Whether NR-NEXUS can gain traction will depend on execution in a market that is attracting well-funded competition. Modal Labs is raising at a reported $2.5 billion valuation. Baseten announced a $300 million round at $5 billion. Fireworks AI secured $250 million. Each approaches inference optimisation from a slightly different angle, but all are chasing the same fundamental opportunity: as AI moves from training to deployment, whoever controls the inference layer controls a growing share of the value chain.

For NeuReality, the appointment of an adviser with Google-grade product instincts may be a modest move on paper. In practice, it is a bet that the next phase of AI infrastructure will reward companies that can bridge the gap between silicon and the enterprises that need to run models at scale, efficiently, and across hardware they already own.


BGIS 2026 Grand Finals: Complete Guide to Date, Venue, and Matches


The BGIS 2026 Grand Finals will be the final stage of India’s biggest BGMI esports tournament this year, deciding this year’s champion. The Finals run from March 27 to March 29, 2026, at the Chennai Trade Centre, making them a must-watch event for esports fans. It is a LAN event, meaning all participating teams play live on stage before an audience.

BGIS 2026 Grand Finals Format

The BGIS 2026 Grand Finals follow a 16-team, 18-match format spread over 3 days, with 6 matches per day. Every match counts, so consistency across the event will decide the champion.

Of the 16 teams, 8 qualified through the Semi-Finals (March 12–15) and the other 8 made it through the Survival Stage (March 16–17). These teams will face each other in the finals from March 27 to 29, 2026.

Qualified Teams List

  • Team Soul
  • Orangutan
  • Genesis Esports
  • Learn From Past (LEFP)
  • Reckoning Esports
  • Revenant XSpark
  • Victores Sumus
  • Meta Ninza
  • GodLike Esports
  • Welt Esports
  • Nebula Esports
  • Myth Official
  • Wyld Fangs
  • K9 Esports
  • Team Tamilas
  • Vasista Esports

Prize Pool Breakdown

BGIS 2026 has a record-breaking ₹4 Crore prize pool, making it the biggest BGMI event so far. The champions will receive ₹1 Crore, while other teams will also earn significant rewards.

Position Prize Money (INR)
Champions ₹1,00,00,000 (1 Crore)
Runner-Up ₹50,00,000
3rd Place ₹35,00,000
4th Place ₹25,00,000
5th Place ₹20,50,000
6th Place ₹16,00,000
7th – 8th ₹14,00,000 each
9th – 10th ₹11,50,000 each
11th – 12th ₹10,00,000 each
13th – 14th ₹9,00,000 each
15th – 16th ₹8,00,000 each
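As a quick sanity check on the table, the listed Grand Finals payouts can be totalled; the remainder of the ₹4 Crore pool presumably covers earlier stages of the tournament. A minimal sketch:

```python
# Sum the Grand Finals prize table above. Positions 7th through 16th pay
# each of the two teams in the bracket, hence the doubling.
single = [1_00_00_000, 50_00_000, 35_00_000, 25_00_000, 20_50_000, 16_00_000]
paired = [14_00_000, 11_50_000, 10_00_000, 9_00_000, 8_00_000]
total = sum(single) + 2 * sum(paired)
print(total)  # 35150000, i.e. ₹3.515 Crore of the ₹4 Crore pool
```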

How to Watch BGIS 2026 Grand Finals Live

You can watch all BGIS 2026 Grand Finals matches live on the official Krafton India Esports YouTube channel starting at 2:30 PM IST. In addition, if you want to attend the event in person, tickets are available through Swiggy Scenes. BGIS 2026 was played across different stages before reaching the final round.

  • The tournament started with the Grind from January 17 to February 1, 2026.
  • After that, teams competed in the Semi-Finals from March 12 to 15 in Hyderabad, where the top teams secured their spots.
  • Next, teams played the Survival Stage on March 16 and 17 in Hyderabad to earn the remaining slots.
  • Finally, the top teams will compete in the Grand Finals from March 27 to 29 in Chennai to decide the champion.


Zalos raises $3.6M to automate finance workflows


The YC Fall 2025 startup, founded by a former Agicap GM and a former Apple Pay engineer, converts screen recordings of finance workflows directly into computer agents, no API integration required. 14 Peaks led the round, with Cohen Circle and 20VC participating.


The CFO’s software stack is both the problem and the constraint. Enterprise finance teams typically run on a combination of ERPs, CRMs, spreadsheets, email, and banking platforms that were built at different times, by different vendors, for different purposes.

APIs between these systems are often incomplete or absent, which means finance teams absorb the integration gap themselves, manually downloading, reformatting, uploading, and reconciling data across systems to complete tasks that should be automated.

Zalos, a San Francisco and London-based startup that emerged from Y Combinator’s Fall 2025 batch, has raised $3.6 million on the thesis that the fix is not a new ERP but a new kind of agent that operates the existing stack the way a human analyst would.


The round is led by 14 Peaks, the Swiss venture capital firm, with participation from Cohen Circle and 20VC. The angel list is notable for its domain specificity: Mike Lenz, CFO of FedEx; Ian Sutherland, CFO of UK business bank Tide; Paul Forster, founder of Indeed; and others with backgrounds in finance software, accounts payable, and enterprise infrastructure. 


The technical approach is unusually direct. Rather than requiring API integrations or custom connectors, Zalos trains agents from screen recordings of the actual workflows finance teams run inside their existing tools.

A billing cycle recorded in NetSuite, a reconciliation process in SAP S/4HANA, or a month-end close in Sage becomes the training input. The agent then replicates that sequence, logging in with a username and password, navigating screens, entering data, handling two-factor authentication, without any modification to the underlying system.

Every action is captured in an auditable log, and the platform holds SOC 2 Type II certification. The avoidance of API dependency is the commercial insight: most enterprise automation efforts in finance stall because the APIs don’t exist, don’t expose the right data, or require months of integration work before anything runs.

The two founders arrived at the same conclusion from different directions. William Fairbairn, CEO, spent years as UK General Manager at Agicap, a CFO-focused software company valued at around $800 million, where he had hundreds of conversations with finance leaders whose consistent frustration was ERP implementation: projects that take more than a year, deliver modest upside when successful, and carry real career consequences when they go wrong.


Hung Hoang, CTO, spent five years at Apple, working on Apple Pay’s Buy Now Pay Later product and other AI initiatives, and became focused on computer agents partly through work at Twin, a lab focused specifically on the technology. The two met at Y Combinator and began building Zalos in October 2025.

The market positioning is clear but contested. OpenAI’s Operator and Anthropic’s computer use capabilities both operate at the general-purpose layer, agents that can perform tasks across any interface.

Zalos is making a different bet: that finance operations require accuracy levels, audit trails, and domain-specific skills (Excel manipulation, ERP navigation, categorisation logic) that general-purpose agents cannot reliably provide. The company’s current customers are in midmarket and enterprise finance teams; it plans to expand into additional enterprise ERPs and on-premise systems with the new capital.


From Zip To Nought: The Rise And Fall Of Iomega

Published

on

If you were anywhere near a computer in the mid-to-late 1990s, you almost certainly encountered a Zip drive. That distinctive purple peripheral, with its satisfying clunk as you slotted in a cartridge, was as much a fixture of the era as beige tower cases and CRT monitors. Iomega, the company behind it, went from an obscure Utah outfit to a multi-billion-dollar darling of Wall Street in the span of about two years. And then, almost as quickly, it all fell apart.

The story of Iomega is one of genuine engineering innovation and the fickle nature of consumer technology. As with so many other juggernauts of its era, Iomega was eventually brought down by a new technology that simply wasn’t practical to counter.

The House That Bernoulli Built

Iomega was founded in Utah, in 1980, by Jerome Paul Johnson, David Bailey, and David Norton. The company soon developed a novel approach to removable magnetic storage based on the Bernoulli effect. The Bernoulli Box arrived in 1982, which was a drive relying on PET film disks spun at 1500 RPM inside a rigid, removable cartridge. The airflow generated by the spinning disk pulled the media down toward the read/write head thanks to the eponymous Bernoulli effect. While spinning, the disk would float a mere micron above the head surface on a cushion of air. If the power cut out or the drive otherwise failed, the disk simply floated away from the head rather than crashing into it—a boon over contemporary hard drives for which head crashes were a real risk. The Bernoulli Box made them essentially impossible.

Early Bernoulli Box drives offered 10 MB and 20 MB of removable storage at a time when a fixed hard drive might hold 30 MB. Bernoulli Boxes were never really aimed at the home market, but found a devoted following among power users—publishers, CAD users, and anyone who needed to move serious amounts of data between machines. Sales were strong, and by 1983, Iomega hit the stock market running with an initial public offering raising $21.7 million.


As hard drive prices continued to fall thanks to economies of scale, though, the expensive Bernoulli Box became a less attractive proposition despite its portability and capacity. By 1986, Iomega had sold over 70,000 units and more than a million cartridges, but sales had peaked. The company had racked up serious debt, and slow sales left it saddled with inventory that wouldn’t move. Upgrades came thick and fast as Iomega pushed to keep up with the rapidly changing storage market, which was enough to keep the company relevant if not flourishing. By 1993, the largest Bernoulli cartridges could hold 230 MB, given a suitable drive to read them, though the expensive drives mostly remained the domain of large corporate and government users.

Zipping To The Top

The Iomega Zip drive became a popular way to move large amounts of data in an era when floppy drives were starting to become painfully small. Credit: Yuri Litvinenko, CC BY 2.0

The next phase saw Iomega reach its greatest peak. The Zip drive launched in March 1995, aiming to be a more affordable take on high-capacity removable storage. It hit the market with 100 MB cartridges priced at $19.95 each, in an era when the standard 3.5” floppy could only hold 1.44 MB. For anyone regularly shuffling large files between home and office, or backing up a hard drive that might only hold a few hundred megabytes, it was a great leap forward. The iconic external model became popular in businesses, universities, and homes, and before long, OEMs like Apple, Dell, and Gateway started offering internal Zip drives as factory options. It became as close to a de facto standard for removable storage as a proprietary solution ever could.

Later models of Zip drive would expand the storage capacity to 250 MB and later 750 MB. Many OEM manufacturers would offer internal Zip drives as an option, though they never reached the market penetration of the mainstream 3.5″ floppy drive. Credit: Tomchiukc, public domain

When the Zip drive hit, the sales numbers were staggering. Iomega’s revenue leapt from $362 million in 1995 to $1.2 billion in 1996. At its peak, Iomega was valued at nearly $7 billion. The company’s stock became a darling of investors addicted to massive gains. For a time, they appeared to be an unstoppable tech juggernaut, hanging on to a sizable chunk of the removable storage market without any obvious competitors on the horizon.

The Jaz drive was Iomega’s heavier-duty portable storage solution, using hard disk-like platters in a portable cartridge. Credit: WillMcC, CC BY-SA 3.0

Iomega chased the success of the Zip drive with the even higher-capacity Jaz drive, which could store 1 GB in early models on hefty cartridges that contained rigid drive platters not dissimilar from those in contemporary hard disks. They were a great solution for power users moving what was then considered a lot of data, but their higher price meant they were never a consumer-grade darling like the cheaper Zip drive itself. The "Clik" drive (later renamed "PocketZip") followed in 1999, with a diminutive form factor and 40 MB disks. It too failed to gain the foothold of the Zip, however, with a low install base limiting the usefulness of the removable format.

It wasn’t all smooth sailing, of course. A serious blow to Iomega’s reputation came from its own engineering. Some Zip drives developed a fault that came to be known as the “Click of Death.” The term referred to a clicking sound of the drive heads bouncing off their end stops when they became misaligned. In extreme cases, misaligned heads in a bad drive could damage disks, which would then damage the next drive they were used in. It was a mark against the technology that was supposed to be robust enough to be used as mobile storage. A class action lawsuit was filed in September 1998 and eventually settled in 2001, but the reputational damage remained.

Downfall

It wasn't the Click of Death debacle that doomed Iomega, though. It was simply the march of competing technologies that made its storage solutions less attractive over time. CD-R drives, which had been expensive curiosities in the mid-1990s, became dirt cheap just a few years later. By 2000, blank CD-Rs were retailing for as little as fifty cents each, and they held 650 MB a pop, more than six times the capacity of a Zip disk, on media that cost a fraction of the price and didn't require proprietary hardware. They were so cheap that the write-once nature almost failed to matter. It was far more attractive to many customers to just burn another cheap CD that anyone could read than to go out and buy a Zip drive, an expensive 100 MB disk, and hope that whoever you were sending the disk to also had a drive that could read it. The CD-RW followed soon after, and writable DVDs would then take storage capacities well into the multi-gigabyte range. Zip drives jumped to 250 MB and then 750 MB, while the Jaz line was upgraded to 2 GB, but by and large, consumers were choosing writable optical discs over Iomega's proprietary solutions.

USB flash drives would then prove to be the final nail in the coffin. They were compact and cheap, and required no special hardware whatsoever. You could just plug them into any USB port on any computer and your files were right there. They too would become cheap enough to be disposable, in a way that Iomega’s bespoke drives and mechanically-complicated cartridges would never be.

Iomega’s sales charts tell the story—Zip drives quickly fell out of fashion in the early 2000s as cheaper alternatives started to dominate the market. Credit: Rubberkeith, CC BY-SA 3.0

By 2002, the Jaz drive was dead, and the Zip drive followed soon after in 2003. CD burners did the most damage, while the leap to DVD and the rising prominence of the USB drive ensured there would be no way back for removable magnetic cartridge media. These solutions were far less mechanically complex and a lot cheaper in terms of cost per megabyte.

Iomega was, at this point, a lumbering corporation with hundreds of employees, a dying product line, and a bleak future ahead. The company pivoted to other storage solutions, like selling rebranded optical disc drives, external hard drives, and network-attached storage devices. However, none of these products were particularly unique or competitive, as Iomega went from dominating a specific niche to fighting in a market segment where it had no particular competitive advantage. They became a small, sickly fish in a big pond, competing against dozens of other established storage brands that were far more renowned in their fields.

Iomega’s last few products were either rebranded hardware or otherwise unexceptional NAS devices. Credit: via eBay

The end came in April 2008, when EMC Corporation announced plans to acquire Iomega for $213 million — a tiny fraction of the company’s peak valuation. EMC saw lingering value in Iomega’s small office and home office customer base, and kept the brand alive for a few years, slapping the Iomega name on NAS boxes and media adapters. These weren’t iconic products unique to the brand, so much as middle-of-the-road options that had no technical edge or promise to speak of. In 2013, EMC formed a joint venture with Lenovo called LenovoEMC. Iomega’s remaining products were rebadged accordingly, and the brand effectively ceased to exist. There was no reason to continue Iomega, because what it was built to do was simply no longer relevant in the modern marketplace.

The Iomega story is, in many ways, the archetypal cautionary tale of the consumer technology industry. In the 1990s, the company identified a genuine need—affordable, portable, high-capacity removable storage. It nailed this brief with the Zip drive, which propelled the company's fortunes into the stratosphere. However, the entire business hinged on a product category that had a shelf life measured in years. Iomega simply couldn't hold on to its edge in removable storage against so many competitors that were both cheaper and more practical. It died the same death as Blockbuster: fail to see the future, and you will inevitably succumb to it.


Tech

The three disciplines separating AI agent demos from real-world deployment

Getting AI agents to perform reliably in production — not just in demos — is turning out to be harder than enterprises anticipated. Fragmented data, unclear workflows, and runaway escalation rates are slowing deployments across industries.

“The technology itself often works well in demonstrations,” said Sanchit Vir Gogia, chief analyst with Greyhound Research. “The challenge begins when it is asked to operate inside the complexity of a real organization.” 

Burley Kawasaki, who oversees agent deployment at Creatio, and his team have developed a methodology built around three disciplines: data virtualization to work around data lake delays; agent dashboards and KPIs as a management layer; and tightly bounded use-case loops to drive toward high autonomy.

In simpler use cases, Kawasaki says these practices have enabled agents to handle up to 80-90% of tasks on their own. With further tuning, he estimates they could support autonomous resolution in at least half of use cases, even in more complex deployments.

“People have been experimenting a lot with proof of concepts, they’ve been putting a lot of tests out there,” Kawasaki told VentureBeat. “But now in 2026, we’re starting to focus on mission-critical workflows that drive either operational efficiencies or additional revenue.”

Why agents keep failing in production

Enterprises are eager to adopt agentic AI in some form or another — often because they're afraid of being left out, even before they identify tangible real-world use cases — but run into significant bottlenecks around data architecture, integration, monitoring, security, and workflow design. 

The first obstacle almost always has to do with data, Gogia said. Enterprise information rarely exists in a neat or unified form; it is spread across SaaS platforms, apps, internal databases, and other data stores. Some are structured, some are not. 

But even when enterprises overcome the data retrieval problem, integration is a big challenge. Agents rely on APIs and automation hooks to interact with applications, but many enterprise systems were designed long before this kind of autonomous interaction was a reality, Gogia pointed out. 

This can result in incomplete or inconsistent APIs, and systems can respond unpredictably when accessed programmatically. Organizations also run into snags when they attempt to automate processes that were never formally defined, Gogia said. 

"Many business workflows depend on tacit knowledge," he said. That is, employees know how to resolve exceptions they've seen before without explicit instructions — but those missing rules and instructions become startlingly obvious when workflows are translated into automation logic.

The tuning loop

Creatio deploys agents in a “bounded scope with clear guardrails,” followed by an “explicit” tuning and validation phase, Kawasaki explained. Teams review initial outcomes, adjust as needed, then re-test until they’ve reached an acceptable level of accuracy. 

That loop typically follows this pattern: 

  • Design-time tuning (before go-live): Performance is improved through prompt engineering, context wrapping, role definitions, workflow design, and grounding in data and documents. 

  • Human-in-the-loop correction (during execution): Devs approve, edit, or resolve exceptions. Where humans have to intervene the most (escalation or approval), users establish stronger rules, provide more context, update workflow steps, or narrow tool access. 

  • Ongoing optimization (after go-live): Devs continue to monitor exception rates and outcomes, then tune repeatedly as needed, helping to improve accuracy and autonomy over time. 
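
In outline, that loop is a simple control loop: measure the exception rate, tune, and re-test until it falls below an acceptable threshold. The sketch below is purely illustrative — the function names, the 10% threshold, and the toy agent are invented, not Creatio's implementation:

```python
# Hypothetical sketch of the deploy -> measure -> tune loop described above.
# All names and thresholds are illustrative, not Creatio's actual code.

def run_tuning_loop(agent, evaluate, tune, target_exception_rate=0.10, max_rounds=5):
    """Re-test the agent until escalations fall below an acceptable threshold."""
    history = []
    for round_no in range(1, max_rounds + 1):
        exception_rate = evaluate(agent)      # fraction of tasks escalated to humans
        history.append((round_no, exception_rate))
        if exception_rate <= target_exception_rate:
            break                             # accuracy is acceptable; go live
        tune(agent, exception_rate)           # adjust prompts, rules, tool access

    return history

# Toy demo: pretend each tuning round halves the exception rate.
class ToyAgent:
    exception_rate = 0.40

result = run_tuning_loop(
    ToyAgent,
    evaluate=lambda a: a.exception_rate,
    tune=lambda a, r: setattr(a, "exception_rate", r / 2),
)
print(result)  # [(1, 0.4), (2, 0.2), (3, 0.1)]
```

The history list doubles as the audit trail: each round records how far the agent was from its target before the next adjustment.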

Kawasaki’s team applies retrieval-augmented generation to ground agents in enterprise knowledge bases, CRM data, and other proprietary sources. 
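
The grounding step can be sketched as follows; real RAG systems use vector embeddings and a vector store, so the naive word-overlap scoring and the sample knowledge-base entries here are stand-ins, not anyone's production pipeline:

```python
# Minimal retrieval-augmented generation sketch. Real systems use embeddings
# and a vector store; word overlap stands in for similarity here, and the
# knowledge-base entries are invented.

def retrieve(query, documents, k=2):
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def grounded_prompt(query, documents):
    """Prepend retrieved enterprise knowledge so the model answers from it."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Renewal quotes require approval above 10000 USD.",
    "Onboarding documents are stored in the CRM under the account record.",
    "The cafeteria closes at 15:00.",
]
print(grounded_prompt("Where are onboarding documents stored?", kb))
```

The payoff is in the prompt itself: the model is told to answer only from retrieved enterprise content, which keeps outputs tied to approved sources.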

Once agents are deployed in the wild, they are monitored with a dashboard providing performance analytics, conversion insights, and auditability. Essentially, agents are treated like digital workers. They have their own management layer with dashboards and KPIs.

For instance, an onboarding agent will be incorporated as a standard dashboard interface providing agent monitoring and telemetry. This is part of the platform layer — orchestration, governance, security, workflow execution, monitoring, and UI embedding —  that sits “above the LLM,” Kawasaki said.

Users see a dashboard of agents in use and each of their processes, workflows, and executed results. They can “drill down” into an individual record (like a referral or renewal) that shows a step-by-step execution log and related communications to support traceability, debugging, and agent tweaking. The most common adjustments involve logic and incentives, business rules, prompt context, and tool access, Kawasaki said. 
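
A minimal version of that management layer might pair per-agent KPIs with a per-record execution log for drill-down. Everything below — the agent name, record IDs, and outcome labels — is invented for illustration:

```python
# Illustrative management layer: per-agent KPIs plus a step-by-step execution
# log keyed by record, so an operator can drill into any individual run.

from collections import defaultdict

class AgentTelemetry:
    def __init__(self):
        self.logs = defaultdict(list)     # record_id -> [(step, outcome), ...]
        self.kpis = defaultdict(lambda: {"completed": 0, "escalated": 0})

    def log_step(self, agent, record_id, step, outcome):
        self.logs[record_id].append((step, outcome))
        if outcome == "escalated":
            self.kpis[agent]["escalated"] += 1
        elif outcome == "done":
            self.kpis[agent]["completed"] += 1

telemetry = AgentTelemetry()
telemetry.log_step("renewal-agent", "renewal-42", "draft quote", "done")
telemetry.log_step("renewal-agent", "renewal-42", "price exception", "escalated")
print(telemetry.kpis["renewal-agent"])   # aggregate KPI view
print(telemetry.logs["renewal-42"])      # full trace for drill-down
```

The KPI dictionary is what a dashboard would aggregate; the per-record log is what "drilling down" into a referral or renewal would display.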

The biggest issues that come up post-deployment: 

  • Exception handling volume can be high: Early spikes in edge cases often occur until guardrails and workflows are tuned. 

  • Data quality and completeness: Missing or inconsistent fields and documents can cause escalations; teams can identify which data to prioritize for grounding and which checks to automate.

  • Auditability and trust: Regulated customers, particularly, require clear logs, approvals, role-based access control (RBAC), and audit trails.

“We always explain that you have to allocate time to train agents,” Creatio’s CEO Katherine Kostereva told VentureBeat. “It doesn’t happen immediately when you switch on the agent, it needs time to understand fully, then the number of mistakes will decrease.” 

“Data readiness” doesn’t always require an overhaul

When looking to deploy agents, "Is my data ready?" is a common early question. Enterprises know data access is important, but can be put off by a massive data consolidation project. 

But virtual connections can allow agents access to underlying systems and get around typical data lake/lakehouse/warehouse delays. Kawasaki’s team built a platform that integrates with data, and is now working on an approach that will pull data into a virtual object, process it, and use it like a standard object for UIs and workflows. This way, they don’t have to “persist or duplicate” large volumes of data in their database. 
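
The virtual-object idea can be sketched as a record that resolves fields against the source system at access time instead of copying data into the CRM. The fetch function and the toy ledger below are hypothetical stand-ins for a live connector:

```python
# A "virtual object" resolves fields from the underlying system on every
# access instead of persisting a copy. The lambda below is a hypothetical
# stand-in for a connector to, say, a core banking ledger too large to copy.

class VirtualObject:
    def __init__(self, fetch_fn):
        self._fetch = fetch_fn   # live query against the source system
        self.calls = 0           # visible proof nothing is cached or copied

    def get(self, field):
        self.calls += 1
        return self._fetch(field)

ledger = {"balance": 1250.0, "last_txn": "2024-11-02"}   # invented sample data
account = VirtualObject(lambda field: ledger[field])

print(account.get("balance"))   # 1250.0, fetched on demand, never persisted
```

To a workflow or UI, `account` looks like a standard object; the source system remains the single copy of the data.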

This technique can be helpful in areas like banking, where transaction volumes are simply too large to copy into CRM, but are “still valuable for AI analysis and triggers,” Kawasaki said.

Once integrations and virtual objects are established, teams can evaluate data completeness, consistency, and availability, and identify low-friction starting points (like document-heavy or unstructured workflows). 

Kawasaki emphasized the importance of “really using the data in the underlying systems, which tends to actually be the cleanest or the source of truth anyway.” 

Matching agents to the work

The best fit for autonomous (or near-autonomous) agents are high-volume workflows with “clear structure and controllable risk,” Kawasaki said. For instance, document intake and validation in onboarding or loan preparation, or standardized outreach like renewals and referrals.

“Especially when you can link them to very specific processes inside an industry — that’s where you can really measure and deliver hard ROI,” he said. 

For instance, financial institutions are often siloed by nature. Commercial lending teams perform in their own environment, wealth management in another. But an autonomous agent can look across departments and separate data stores to identify, for instance, commercial customers who might be good candidates for wealth management or advisory services.

“You think it would be an obvious opportunity, but no one is looking across all the silos,” Kawasaki said. Some banks that have applied agents to this very scenario have seen “benefits of millions of dollars of incremental revenue,” he claimed, without naming specific institutions. 

However, in other cases — particularly in regulated industries — longer-context agents are not only preferable, but necessary. For instance, in multi-step tasks like gathering evidence across systems, summarizing, comparing, drafting communications, and producing auditable rationales.

“The agent isn’t giving you a response immediately,” Kawasaki said. “It may take hours, days, to complete full end-to-end tasks.” 

This requires orchestrated agentic execution rather than a “single giant prompt,” he said. This approach breaks work down into deterministic steps to be performed by sub-agents. Memory and context management can be maintained across various steps and time intervals. Grounding with RAG can help keep outputs tied to approved sources, and users have the ability to dictate expansion to file shares and other document repositories.
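
That checkpointed, step-wise style of execution might look like the following sketch, with plain functions standing in for sub-agents and LLM calls; the pipeline steps and state fields are invented for illustration:

```python
# Hypothetical pipeline: deterministic steps run by "sub-agents" (plain
# functions here), each intermediate state saved as a reviewable checkpoint.

def gather(state):
    state["evidence"] = ["doc-a", "doc-b"]   # e.g. pulled from several systems
    return state

def summarize(state):
    state["summary"] = f"{len(state['evidence'])} sources reviewed"
    return state

def draft(state):
    state["draft"] = f"Report: {state['summary']}"
    return state

PIPELINE = [("gather", gather), ("summarize", summarize), ("draft", draft)]

def run(task):
    state, checkpoints = {"task": task}, []
    for name, step in PIPELINE:
        state = step(dict(state))                  # each step gets its own copy
        checkpoints.append((name, dict(state)))    # intermediate artifact for review
    return state, checkpoints

final, checkpoints = run("compliance review")
print(final["draft"])   # Report: 2 sources reviewed
```

The checkpoint list is what makes the intermediate-review feedback loop possible: a human can inspect the output of any step before the next one commits.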

This model typically doesn’t require custom retraining or a new foundation model. Whatever model enterprises use (GPT, Claude, Gemini), performance improves through prompts, role definitions, controlled tools, workflows, and data grounding, Kawasaki said. 

The feedback loop puts “extra emphasis” on intermediate checkpoints, he said. Humans review intermediate artifacts (such as summaries, extracted facts, or draft recommendations) and correct errors. Those can then be converted into better rules and retrieval sources, narrower tool scopes, and improved templates. 

“What is important for this style of autonomous agent, is you mix the best of both worlds: The dynamic reasoning of AI, with the control and power of true orchestration,” Kawasaki said.

Ultimately, agents require coordinated changes across enterprise architecture, new orchestration frameworks, and explicit access controls, Gogia said. Agents must be assigned identities to restrict their privileges and keep them within bounds. Observability is critical; monitoring tools can record task completion rates, escalation events, system interactions, and error patterns. This kind of evaluation must be a permanent practice, and agents should be tested to see how they react when encountering new scenarios and unusual inputs. 

“The moment an AI system can take action, enterprises have to answer several questions that rarely appear during copilot deployments,” Gogia said. Such as: What systems is the agent allowed to access? What types of actions can it perform without approval? Which activities must always require a human decision? How will every action be recorded and reviewed?
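
Those questions can be answered concretely in something as simple as a policy table consulted before every action. The agent name, systems, and actions below are hypothetical:

```python
# Hypothetical policy table: which systems an agent may touch, what it may do
# unattended, and what must always be routed to a human. Names are invented.

POLICY = {
    "renewal-agent": {
        "systems": {"crm", "email"},
        "autonomous": {"draft_quote", "send_reminder"},
        "needs_approval": {"apply_discount", "close_account"},
    }
}

def authorize(agent, system, action):
    rules = POLICY.get(agent)
    if rules is None or system not in rules["systems"]:
        return "deny"
    if action in rules["autonomous"]:
        return "allow"
    if action in rules["needs_approval"]:
        return "escalate"   # record the request and wait for a human decision
    return "deny"           # anything unlisted is denied by default

print(authorize("renewal-agent", "crm", "draft_quote"))      # allow
print(authorize("renewal-agent", "crm", "apply_discount"))   # escalate
print(authorize("renewal-agent", "billing", "draft_quote"))  # deny
```

Every call to `authorize` is itself a loggable event, which covers the "how will every action be recorded and reviewed" question.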

“Those [enterprises] that underestimate the challenge often find themselves stuck in demonstrations that look impressive but cannot survive real operational complexity,” Gogia said. 


Tech

What is DeerFlow 2.0 and what should enterprises know about this new, powerful local AI agent orchestrator?

ByteDance, the Chinese tech giant behind TikTok, last month released what may be one of the most ambitious open-source AI agent frameworks to date: DeerFlow 2.0. It’s now going viral across the machine learning community on social media. But is it safe and ready for enterprise use?

This is a so-called “SuperAgent harness” that orchestrates multiple AI sub-agents to autonomously complete complex, multi-hour tasks. Best of all: it is available under the permissive, enterprise-friendly standard MIT License, meaning anyone can use, modify, and build on it commercially at no cost.

DeerFlow 2.0 is designed for high-complexity, long-horizon tasks that require autonomous orchestration over minutes or hours, including conducting deep research into industry trends, generating comprehensive reports and slide decks, building functional web pages, producing AI-generated videos and reference images, performing exploratory data analysis with insightful visualizations, analyzing and summarizing podcasts or video content, automating complex data and content workflows, and explaining technical architectures through creative formats like comic strips.

ByteDance offers a bifurcated deployment strategy that separates the orchestration harness from the AI inference engine. Users can run the core harness directly on a local machine, deploy it across a private Kubernetes cluster for enterprise scale, or connect it to external messaging platforms like Slack or Telegram without requiring a public IP.

While many opt for cloud-based inference via OpenAI or Anthropic APIs, the framework is natively model-agnostic, supporting fully localized setups through tools like Ollama. This flexibility allows organizations to tailor the system to their specific data sovereignty needs, choosing between the convenience of cloud-hosted “brains” and the total privacy of a restricted on-premise stack.

Importantly, choosing the local route does not mean sacrificing security or functional isolation. Even when running entirely on a single workstation, DeerFlow still utilizes a Docker-based “AIO Sandbox” to provide the agent with its own execution environment.

This sandbox—which contains its own browser, shell, and persistent filesystem—ensures that the agent’s “vibe coding” and file manipulations remain strictly contained. Whether the underlying models are served via the cloud or a local server, the agent’s actions always occur within this isolated container, allowing for safe, long-running tasks that can execute bash commands and manage data without risk to the host system’s core integrity.
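
That kind of containment typically comes from a handful of container flags. The sketch below only assembles (and never runs) a docker invocation; the image name, mount path, and resource limits are invented for illustration and are not DeerFlow's actual configuration:

```python
# Assembles (but does not execute) a docker invocation with the isolation
# knobs a sandbox like this relies on. Image, mount path, limits are invented.

def sandbox_command(workspace, image="example/agent-sandbox:latest"):
    return [
        "docker", "run", "--rm",           # container is discarded afterwards
        "--network", "bridge",             # own network namespace, not the host's
        "--memory", "2g", "--cpus", "2",   # cap resources against runaway tasks
        "-v", f"{workspace}:/workspace",   # scoped, persistent filesystem mount
        image, "bash",
    ]

cmd = sandbox_command("/tmp/agent-ws")
print(" ".join(cmd))
```

The mount is the key trade-off: the agent keeps a persistent workspace across long tasks, while everything outside that one directory stays out of reach.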

Since its release last month, it has accumulated more than 39,000 stars (user saves) and 4,600 forks — a growth trajectory that has developers and researchers alike paying close attention.

Not a chatbot wrapper: what DeerFlow 2.0 actually is

DeerFlow is not another thin wrapper around a large language model. The distinction matters.

While many AI tools give a model access to a search API and call it an agent, DeerFlow 2.0 gives its agents an actual isolated computer environment: a Docker sandbox with a persistent, mountable filesystem.

The system maintains both short- and long-term memory that builds user profiles across sessions. It loads modular “skills” — discrete workflows — on demand to keep context windows manageable. And when a task is too large for one agent, a lead agent decomposes it, spawns parallel sub-agents with isolated contexts, executes code and Bash commands safely, and synthesizes the results into a finished deliverable.
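
The decompose/spawn/synthesize pattern can be sketched with ordinary threads. Here the sub-agent is a plain function standing in for an LLM call, and the three-way split is arbitrary — none of this is DeerFlow's code:

```python
# Toy decompose/spawn/synthesize flow: a lead agent splits the task, runs
# sub-agents in parallel with isolated inputs, then merges the results.

from concurrent.futures import ThreadPoolExecutor

def decompose(task):
    return [f"{task}: section {i}" for i in range(1, 4)]

def sub_agent(subtask):
    return f"[done] {subtask}"   # sees only its own subtask (isolated context)

def lead_agent(task):
    subtasks = decompose(task)
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        results = list(pool.map(sub_agent, subtasks))   # map preserves order
    return "\n".join(results)    # synthesis step

report = lead_agent("market report")
print(report)
```

Keeping each sub-agent's context isolated is what lets the pattern scale: no single context window has to hold the whole task.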

It is similar to the approach being pursued by NanoClaw, an OpenClaw variant, which recently partnered with Docker itself to offer enterprise-grade sandboxes for agents and subagents.

But while NanoClaw is extremely open-ended, DeerFlow has more clearly defined its architecture and scoped its tasks: demos on the project's official site, deerflow.tech, showcase real outputs: agent trend forecast reports, videos generated from literary prompts, comics explaining machine learning concepts, data analysis notebooks, and podcast summaries.

The framework is designed for tasks that take minutes to hours to complete — the kind of work that currently requires a human analyst or a paid subscription to a specialized AI service.

From Deep Research to Super Agent

DeerFlow’s original v1 launched in May 2025 as a focused deep-research framework. Version 2.0 is something categorically different: a ground-up rewrite on LangGraph 1.0 and LangChain that shares no code with its predecessor. ByteDance explicitly framed the release as a transition “from a Deep Research agent into a full-stack Super Agent.”

New in v2: a batteries-included runtime with filesystem access, sandboxed execution, persistent memory, and sub-agent spawning; progressive skill loading; Kubernetes support for distributed execution; and long-horizon task management that can run autonomously across extended timeframes.
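
Progressive skill loading is essentially lazy initialization from a registry: skills are registered up front as cheap factories and only materialized when a task needs them, keeping context lean. The registry, decorator, and toy summarizer below are all invented to show the shape of the idea, not DeerFlow's API:

```python
# Sketch of progressive skill loading: skills register as small factories
# and are only materialized on first use.

SKILL_REGISTRY = {}

def skill(name):
    def register(factory):
        SKILL_REGISTRY[name] = factory   # cheap: nothing is built yet
        return factory
    return register

@skill("summarize")
def make_summarizer():
    return lambda text: text[:20] + "..."   # toy stand-in for a real skill

loaded = {}

def load_skill(name):
    if name not in loaded:               # lazy: built only when first needed
        loaded[name] = SKILL_REGISTRY[name]()
    return loaded[name]

print(load_skill("summarize")("DeerFlow decomposes long-horizon tasks"))
```

Registering dozens of skills costs almost nothing; only the ones a given task invokes ever occupy memory or context.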

The framework is fully model-agnostic, working with any OpenAI-compatible API. It has strong out-of-the-box support for ByteDance’s own Doubao-Seed models, as well as DeepSeek v3.2, Kimi 2.5, Anthropic’s Claude, OpenAI’s GPT variants, and local models run via Ollama. It also integrates with Claude Code for terminal-based tasks, and with messaging platforms including Slack, Telegram, and Feishu.

Why it’s going viral now

The project’s current viral moment is the result of a slow build that accelerated sharply this week.

The February 28 launch generated significant initial buzz, but it was coverage in machine learning media — including deeplearning.ai’s The Batch — over the following two weeks that built credibility in the research community.

Then, on March 21, AI influencer Min Choi posted to his large X following: “China’s ByteDance just dropped DeerFlow 2.0. This AI is a super agent harness with sub-agents, memory, sandboxes, IM channels, and Claude Code integration. 100% open source.” The post earned more than 1,300 likes and triggered a cascade of reposts and commentary across AI Twitter.

A search of X using Grok uncovered the full scope of that response. Influencer Brian Roemmele, after conducting what he described as intensive personal testing, declared that “DeerFlow 2.0 absolutely smokes anything we’ve ever put through its paces” and called it a “paradigm shift,” adding that his company had dropped competing frameworks entirely in favor of running DeerFlow locally. “We use 2.0 LOCAL ONLY. NO CLOUD VERSION,” he wrote.

More pointed commentary came from accounts focused on the business implications. One post from @Thewarlordai, published March 23, framed it bluntly: “MIT licensed AI employees are the death knell for every agent startup trying to sell seat-based subscriptions. The West is arguing over pricing while China just commoditized the entire workforce.”

Another widely shared post described DeerFlow as "an open-source AI staff that researches, codes and ships products while you sleep… now it's a Python repo and a 'make up' away."

Cross-linguistic amplification — with substantive posts in English, Japanese, and Turkish — points to genuine global reach rather than a coordinated promotion campaign, though the latter is not out of the question and may be contributing to the current virality.

The ByteDance question

ByteDance’s involvement is the variable that makes DeerFlow’s reception more complicated than a typical open-source release.

On the technical merits, the open-source, MIT-licensed nature of the project means the code is fully auditable. Developers can inspect what it does, where data flows, and what it sends to external services. That is materially different from using a closed ByteDance consumer product.

But ByteDance operates under Chinese law, and for organizations in regulated industries — finance, healthcare, defense, government — the provenance of software tooling increasingly triggers formal review requirements, regardless of the code’s quality or openness.

The jurisdictional question is not hypothetical: U.S. federal agencies are already operating under guidance that treats Chinese-origin software as a category requiring scrutiny.

For individual developers and small teams running fully local deployments with their own LLM API keys, those concerns are less operationally pressing. For enterprise buyers evaluating DeerFlow as infrastructure, they are not.

A real tool, with limitations

The community enthusiasm is credible, but several caveats apply.

DeerFlow 2.0 is not a consumer product. Setup requires working knowledge of Docker, YAML configuration files, environment variables, and command-line tools. There is no graphical installer. For developers comfortable with that environment, the setup is described as relatively straightforward; for others, it is a meaningful barrier.

Performance when running fully local models — rather than cloud API endpoints — depends heavily on available VRAM and hardware, with context handoff between multiple specialized models a known challenge. For multi-agent tasks running several models in parallel, the resource requirements escalate quickly.

The project’s documentation, while improving, still has gaps for enterprise integration scenarios. There has been no independent public security audit of the sandboxed execution environment, which represents a non-trivial attack surface if exposed to untrusted inputs.

And the ecosystem, while growing fast, is weeks old. The plugin and skill library that would make DeerFlow comparably mature to established orchestration frameworks simply does not exist yet.

What does it mean for enterprises in the AI transformation age?

The deeper significance of DeerFlow 2.0 may be less about the tool itself and more about what it represents in the broader race to define autonomous AI infrastructure.

DeerFlow's emergence as a fully capable, self-hostable, MIT-licensed agentic orchestrator adds yet another twist to the ongoing race among enterprises — and AI builders and model providers themselves — to turn generative AI models into something more than chatbots: full- or at least part-time employees, capable of both communication and reliable action.

In a sense, it marks the natural next wave after OpenClaw: whereas that open-source tool sought to create a dependable, always-on autonomous AI agent the user could message, DeerFlow is designed to let a user deploy a fleet of them and keep track of them, all within the same system.

The decision to implement it in your enterprise hinges on whether your organization’s workload demands “long-horizon” execution—complex, multi-step tasks spanning minutes to hours that involve deep research, coding, and synthesis. Unlike a standard LLM interface, this “SuperAgent” harness decomposes broad prompts into parallel sub-tasks performed by specialized experts. This architecture is specifically designed for high-context workflows where a single-pass response is insufficient and where “vibe coding” or real-time file manipulation in a secure environment is necessary.

The primary condition for use is the technical readiness of an organization’s hardware and sandbox environment. Because each task runs within an isolated Docker container with its own filesystem, shell, and browser, DeerFlow acts as a “computer-in-a-box” for the agent. This makes it ideal for data-intensive workloads or software engineering tasks where an agent must execute and debug code safely without contaminating the host system. However, this “batteries-included” runtime places a significant burden on the infrastructure layer; decision-makers must ensure they have the GPU clusters and VRAM capacity to support multi-agent fleets running in parallel, as the framework’s resource requirements escalate quickly during complex tasks.

Strategic adoption is often a calculation between the overhead of seat-based SaaS subscriptions and the control of self-hosted open-source deployments. The MIT License positions DeerFlow 2.0 as a highly capable, royalty-free alternative to proprietary agent platforms, potentially functioning as a cost ceiling for the entire category. Enterprises should favor adoption if they prioritize data sovereignty and auditability, as the framework is model-agnostic and supports fully local execution with models like DeepSeek or Kimi. If the goal is to commoditize a digital workforce while maintaining total ownership of the tech stack, the framework provides a compelling, if technically demanding, benchmark.

Ultimately, the decision to deploy must be weighed against the inherent risks of an autonomous execution environment and its jurisdictional provenance. While sandboxing provides isolation, the ability of agents to execute bash commands creates a non-trivial attack surface that requires rigorous security governance and auditability. Furthermore, because the project is a ByteDance-led initiative via Volcengine and BytePlus, organizations in regulated sectors must reconcile its technical performance with emerging software-origin standards. Deployment is most appropriate for teams comfortable with a CLI-first, Docker-heavy setup who are ready to trade the convenience of a consumer product for a sophisticated and extensible SuperAgent harness.


Tech

I Only Listened to AI Music for a Week. It Was Terrible, but Not for the Reason You Think

Music is my constant companion. I’m almost always listening to a carefully curated playlist or new album. I wholeheartedly believe Spotify Wrapped Day should be a national holiday. So, as an AI reporter who has watched the so-called AI music industry grow over the past few years, I decided it was finally time to see how these artificial artists stack up. So I set a challenge for myself: I would only listen to AI-created music for a full week. 

It was a very, very long week. AI music really takes the “art” out of artificial. But it was an educational and revealing experience, too. 

The story of AI music is an old record that's been played before. Musicians have debated the role of technology in music creation for well over a century, from the introduction of recorded music on phonographs to synthesizers, autotune and production tech going mainstream. What makes this moment unique is that AI can create entire songs with very little human guidance. But the AI models that do so are built using music created by actual humans, creating a haze of legal woes and ethical chaos — similar to that faced by other creators like writers, artists and filmmakers.


Music is one of the few universal cultural touchstones we have. Generative AI is rapidly changing how music is created, and in effect, changing our humanity with it.

A week of AI music

For the purposes of my self-imposed experiment, I listened only to songs that were verifiably generated or altered by AI. I was pleased to see that the AI music sites offered a wide range of songs, but that initial excitement was short-lived. Most disappointingly, the vast majority of the pop music was shrill and squeaky — the musical version of plastic, in my opinion. 

A lot of the trending songs were electronic music, which I’m sure EDM fans would’ve appreciated more than I did. It just reminded me of a canon event every young person experiences: being stuck at a house party where the person on the aux is “an aspiring DJ.” The house and techno styles just reinforced the idea that I was listening to robotic AI music. That made it hard to enjoy, knowing there wasn’t even the illusion of human creation behind the songs.

I fared much better with country and folk music, which focused heavily on instrumentals and an acoustic sound. A lot of it sounded like it could’ve been by Noah Kahan, Kacey Musgraves or Luke Combs. This is where I started to relax into my typical music habits: getting hooked by a particularly appealing song on first listen, then adding it to a playlist that, as I grew more comfortable and attached to my favorites, I eventually preferred over exploring new music. 


Then there was the truly weird, wacky AI music. Beyond Suno, there is an entire universe of unique AI music on sites like YouTube. My favorite (or the least bad one?) was the 8-minute Game of Thrones disco, complete with a music video, while my editor favored the Lord of the Rings version. I found these engrossing, probably because they’re music videos, not just songs, with haunting AI-slop visuals.


I have no idea what’s going on in this Game of Thrones music video, where white walkers dance like it’s the 1970s, but it was something.

WickedAI/Screenshot by CNET

Tech and music: A song that’s been played before

Technology has always played a role in music. Musical AI is part of a longer arc in music’s history, Mark Ethier, founder of the music tech company iZotope and executive director of Berklee’s Emerging Artistic Technology Lab, told me.


“When GarageBand came out, people felt like, ‘Oh my gosh, I can make music because I can drag some samples of a guitar, have a bass and some drums, and I’ve made a song, right?’” said Ethier. “Where we are today is the most extreme version of that.” 


Traditional music software, such as GarageBand, was meant to enhance and democratize the process of creating music. AI music companies say they do the same, but there’s a big difference: You can pop out entire AI songs with just a sentence or two to guide the vibe. The underlying tech is similar to what is running in chatbots and image generators — transformers and diffusion methods, Suno cofounder Mikey Shulman said in 2023.
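Shulman’s comparison can be made concrete with a toy sketch of the “forward” step that diffusion models share with image generators: a clean signal is progressively blended with noise, and generation amounts to learning to run that process in reverse. Everything below — the schedule values, the sine wave standing in for audio, the function names — is illustrative, not Suno’s actual pipeline; it assumes only NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_signal(x0, alpha_bar, rng):
    """Forward diffusion step: x_t = sqrt(a)*x0 + sqrt(1-a)*eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

# A one-second "audio" signal: a 440 Hz sine at an 8 kHz sample rate.
t = np.linspace(0.0, 1.0, 8000)
x0 = np.sin(2 * np.pi * 440 * t)

# Early in the schedule the signal dominates; late, noise dominates.
x_early, _ = noise_signal(x0, alpha_bar=0.99, rng=rng)
x_late, eps = noise_signal(x0, alpha_bar=0.01, rng=rng)

# A trained model would *predict* eps from x_late; here we cheat with
# the true eps to show that knowing the noise recovers the signal.
recovered = (x_late - np.sqrt(1.0 - 0.01) * eps) / np.sqrt(0.01)

print(np.allclose(recovered, x0))  # True
```

The point of the sketch is the asymmetry: adding noise is trivial, while undoing it without being handed `eps` requires a model trained on enormous amounts of existing audio — which is exactly where the training-data disputes discussed below come from.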

AI music generators like Suno do more than piece together a song or tweak a template. As with imagery and video, AI has made it quicker, cheaper and easier than ever to create something that feels professionally produced.

“What [AI] has changed is just how much easier it is to do, and how indistinguishable the output is,” Ethier said. Before AI, throwing some loops together on GarageBand wouldn’t be enough to make a full song or hit record. “Now, that distinction is not as clear anymore,” he said.


The AI music arena has grown quickly in a short period of time. Sites like Suno and Udio have racked up subscribers and gained notoriety. Suno reached a milestone of 2 million paying subscribers, its cofounder shared in February. But like other creative AI companies, Suno and Udio have been sued by record labels alleging the AI companies used musicians’ work for AI training without permission or compensation. 

Read More: AI Slop Is Destroying the Internet. These Are the People Fighting to Save It

Can we make connections with AI music?

The amount of time I spent listening to music dropped significantly on the days when I was restricted to only AI music, and I felt that deprivation deeply. It wasn’t until I came across a specific category of AI music that I began to border on enjoying the experience. There’s a neuroscientific and psychological reason why, I learned.

Joy Allen, a music therapist and director of Berklee’s Music and Health Institute, told me that there’s a reason music from our teen years sticks so strongly with us. Our adolescent brains are sponges, and music is one of the only things that activates every part of our brain, Allen said. Those connections, fueled by teenage hormones and neurochemicals, stay with us long after.


“When you listen to music, it’s not just activating the auditory cortex. It’s activating where you process emotions [and] physical responses … Our brains love patterns,” Allen said. “If you think about music, it’s patterns, it’s chordal structures, it’s the melody line… so we get used to patterns and predictability.”

My teen years were largely set to the soundtrack of Taylor Swift, and anyone who’s met me knows she’s still my favorite artist. But even knowing what Allen told me, I was surprised at how emotional the AI covers of Taylor Swift songs made me. 

A lot of the AI covers I listened to took Swift’s songs and reimagined them in different genres. An AI pop punk version of “You Belong With Me” sounded like it could’ve been sung by another band from my teen years, 5 Seconds of Summer. It was strangely gratifying, with a heavy dose of nostalgia. It was also the only AI song to get stuck in my head.


Nothing like Taylor Swift for a good dose of nostalgia.

Katie Collins/CNET

During those formative years, we can form emotional attachments to any music, whether created by humans or, theoretically, by AI, Allen said. But since my musical identity is already formed, the AI songs that drew the most visceral, emotional reactions from me were the ones that tapped into those existing connections and memories, firing those neurochemicals in my brain. I was more engaged and happier listening to these AI Swiftie covers than any other AI song. The songs were different, but they were still the lyrics I had sung into my hairbrush as a kid, and in a million other scenarios throughout my life, brought to life in a new way.

While these songs were the highlight of my experiment, they didn’t sell me on AI music any more than the “original” songs did. The AI covers largely reminded me of the ones I had heard in real life and seen clips of online. I liked the AI folk cover of Swift’s “All Too Well,” but it was a cheap imitation compared to the guitarist I heard sing it in a coffee shop last year, or the indie bands adding their own individual touches that I come across on TikTok.

The power of a great artist is their ability to create music that inspires others, to move them and spark flames of creativity. Covers by human musicians are a way to pay tribute and express appreciation; AI covers felt like cheap imitations and mockery by comparison. 


Music is human

I was irritatingly cognizant of my experiment while I was doing it. The AI music never held my attention the way human music did. With a few notable exceptions, the AI songs were basically white noise. I often caught myself drifting toward the Spotify app to turn on better music. By the final days of my experiment, I preferred no music at all to AI music. Even now as I write this, the car horns and bird chirps outside my window are better company than fake instruments. 

AI has become a part of our lives, for better or worse. But it’s not just part of our technology; it’s slowly infiltrating our culture. Music is one of the strongest cultural touchstones we have, and to have AI so quickly and effectively mimic something that is inherently human is… awe-inspiring. Worrisome. But definitely a very clear sign that AI is remaking the very things that define our humanity. It left me with an increasingly deep sense of dread about the havoc AI is wreaking on our culture and humanity.

It’s not just listeners like me who are struggling — musicians are, too. AI-generated music is flooding streaming platforms, leaving companies like Apple Music and Spotify struggling to define what’s allowed, what isn’t and what’s monetizable. It’s even more complex from a legal and ethical point of view.

“As a musician, this is a really complicated time to be understanding tools,” Ethier said. “You used to be able to pick up a trumpet and play trumpet. You didn’t have to think about how that trumpet was trained, or if the trumpet owns your music.”


Music is intrinsically human and social by design. So it wasn’t surprising that I felt disconnected throughout my AI music week. It was an isolating experience — no memories tied to core moments, no TikTok dances, no culture. No artist personality, little fandom. No thoughts of “remember how she jumped an octave when she performed it live?” It was a superficial listening experience, and I didn’t want to revisit the songs once my experiment was done.

So much of the music we listen to is tied to specific memories. The AI songs I felt most connected to were covers of songs I already had a strong emotional connection with: Taylor Swift songs I listened to for the first time at eight years old in the backseat with my childhood besties; songs that were inspired by but utterly lacking the emotion of the ’90s power ballad my dad loves but my mom bemoans every time he plays it; a “Stick Season” AI wannabe that lacks Noah Kahan’s signature “dance while the world burns” flavor.

Music scores so many moments of our lives, from big ones like a married couple’s first dance to the small ones that flow by unnoticed. All of it builds up over a lifetime. Removing the humanity, or worse, trying to mimic it, sucks the soul out of what makes music worthwhile.

So, no, I would not recommend listening to only AI-generated music for a week. But it was useful, if only to further refine my worries about the way AI is eroding our humanity.



Continue Reading

Tech

Cauldron Ferm has turned microbes into nonstop assembly lines

Published

on

Cauldron Ferm has an unlikely origin story, as startups go. Its core technology can be traced back to the 1960s, or maybe the 1970s. The exact start is a bit hazy, actually. What is known is that David and Polly McLennan had a dream of feeding the world using protein grown from microbes.

The pair knew they needed to improve the process, which was pricey and time-consuming. Most fermentation happens in batches. Picture a brewery or a vineyard: ingredients go in, the microbes work for a while, and then the process stops when it’s time to take out the finished product. That works for alcohol, because booze commands a premium price. Food, though? That needs to be cheaper.

Still, the McLennans stuck with it, starting a small business that would over the course of 40 years refine their approach to continuous fermentation, which turns microbes into assembly lines capable of cranking out products uninterrupted.

“We didn’t know what we had,” Michele Stansfield, co-founder and CEO of Cauldron Ferm, told TechCrunch. But eventually Stansfield, who arrived at the McLennans’ company in 2012, realized they had more than they initially thought.


“We didn’t understand the challenge of continuous fermentation for synthetic biology,” Stansfield said. But when she did, she sought to transform the company from a small fee-for-service operator into a fast-moving startup. “At that point, I raised a seed round and acquired the IP, physical, and business assets.”

Cauldron has now raised $13.25 million in a Series A2 round that was led by Main Sequence Ventures with participation from Horizons Ventures, NGS Super, and SOSV, the company exclusively told TechCrunch. It had previously raised $6.5 million in 2024. Cauldron plans to use the funding to “increase the technology moat,” Stansfield said. 

The company calls its technology “hyper fermentation,” which keeps microbes in their maximally productive state. It can work in existing batch fermenters with a few modifications to the facility. Cauldron’s customers bring their own microbes and strains, and the startup tweaks their growing conditions, including nutrients, to keep them humming.


Currently, Cauldron is focused on producing fats and proteins, including whey protein, “a product that can just slip into supply chains,” Stansfield said, though she adds there are more products the company has its eyes on.


“Sixty percent of all inputs to [the] global economy can be produced from biology,” she said. “Food was where we started, but now we’re starting to really diversify.”


Continue Reading

Tech

Jury struggles to reach verdict in social media addiction trial against Meta and YouTube

Published

on


Jurors did not say whether the holdout relates to Meta or YouTube, but Kuhl told them to keep deliberating and warned that if they cannot reach a verdict, that part of the case will have to be retried before a new jury.

Continue Reading

Tech

Dutch Ministry of Finance discloses breach affecting employees

Published

on


The Dutch Ministry of Finance confirmed on Monday that some of its systems were breached in a cyberattack detected last week.

Officials said the ministry was notified of the breach by a third party on March 19 and is still investigating. So far, the investigation has found that the incident affects some employees.

“The Ministry of Finance’s ICT security detected unauthorized access to systems for a number of primary processes within the policy department on Thursday, March 19,” an official statement revealed.

“Following the alert, an immediate investigation was launched, and access to these systems has been blocked as of today. This affects the work of a portion of the employees.”


The ministry added that the cyberattack did not impact systems used to manage tax collection, import/export regulations, and income-linked subsidies, which handle over 9.5 million tax returns annually for income tax alone.

“Services to citizens and businesses provided by the Tax and Customs Administration, Customs, and Benefits have not been affected. We will update this message when we can share more information.”

Although the ministry said the breach affected some of its employees, it didn’t disclose how many were affected or whether the attackers stole any sensitive data. Also, no cybercrime group or threat actor has claimed responsibility for the attack.

BleepingComputer reached out to a Ministry of Finance spokesperson with questions about the incident, including the total number of impacted employees and how long the attackers had access to the compromised systems, but a response was not immediately available.


In September 2024, the Dutch national police (Politie) was also breached in a cyberattack believed to be orchestrated by a “state actor” that stole work-related contact details of multiple police officers.

More recently, in February, Dutch authorities arrested a 40-year-old man for an extortion attempt after he downloaded confidential documents mistakenly shared by the police and refused to delete them unless he received “something in return.”



Continue Reading

Copyright © 2025