Most discussions about vibe coding usually position generative AI as a backup singer rather than the frontman: Helpful as a performer to jump-start ideas, sketch early code structures and explore new directions more quickly. Caution is often urged regarding its suitability for production systems where determinism, testability and operational reliability are non-negotiable.
However, my latest project taught me that achieving production-quality work with an AI assistant requires more than just going with the flow.
I set out with a clear and ambitious goal: To build an entire production‑ready business application by directing an AI inside a vibe coding environment — without writing a single line of code myself. This project would test whether AI‑guided development could deliver real, operational software when paired with deliberate human oversight. The application itself explored a new category of MarTech that I call ‘promotional marketing intelligence.’ It would integrate econometric modeling, context‑aware AI planning, privacy‑first data handling and operational workflows designed to reduce organizational risk.
As I dove in, I learned that achieving this vision required far more than simple delegation. Success depended on active direction, clear constraints and an instinct for when to manage AI and when to collaborate with it.
Advertisement
I wasn’t trying to see how clever the AI could be at implementing these capabilities. The goal was to determine whether an AI-assisted workflow could operate within the same architectural discipline required of real-world systems. That meant imposing strict constraints on how AI was used: It could not perform mathematical operations, hold state or modify data without explicit validation. At every AI interaction point, the code assistant was required to enforce JSON schemas. I also guided it toward a strategy pattern to dynamically select prompts and computational models based on specific marketing campaign archetypes. Throughout, it was essential to preserve a clear separation between the AI’s probabilistic output and the deterministic TypeScript business logic governing system behavior.
I started the project with a clear plan to approach it as a product owner. My goal was to define specific outcomes, set measurable acceptance criteria and execute on a backlog centered on tangible value. Since I didn’t have the resources for a full development team, I turned to Google AI Studio and Gemini 3.0 Pro, assigning them the roles a human team might normally fill. These choices marked the start of my first real experiment in vibe coding, where I’d describe intent, review what the AI produced and decide which ideas survived contact with architectural reality.
It didn’t take long for that plan to evolve. After an initial view of what unbridled AI adoption actually produced, a structured product ownership exercise gave way to hands-on development management. Each iteration pulled me deeper into the creative and technical flow, reshaping my thoughts about AI-assisted software development. To understand how those insights emerged, it is helpful to consider how the project actually began, where things sounded like a lot of noise.
Author made with Microsoft Copilot
Advertisement
The initial jam session: More noise than harmony
I wasn’t sure what I was walking into. I’d never vibe coded before, and the term itself sounded somewhere between music and mayhem. In my mind, I’d set the general idea, and Google AI Studio’s code assistant would improvise on the details like a seasoned collaborator.
That wasn’t what happened.
Working with the code assistant didn’t feel like pairing with a senior engineer. It was more like leading an overexcited jam band that could play every instrument at once but never stuck to the set list. The result was strange, sometimes brilliant and often chaotic.
Out of the initial chaos came a clear lesson about the role of an AI coder. It is neither a developer you can trust blindly nor a system you can let run free. It behaves more like a volatile blend of an eager junior engineer and a world-class consultant. Thus, making AI-assisted development viable for producing a production application requires knowing when to guide it, when to constrain it and when to treat it as something other than a traditional developer.
Advertisement
In the first few days, I treated Google AI Studio like an open mic night. No rules. No plan. Just let’s see what this thing can do. It moved fast. Almost too fast. Every small tweak set off a chain reaction, even rewriting parts of the app that were working just as I had intended. Now and then, the AI’s surprises were brilliant. But more often, they sent me wandering down unproductive rabbit holes.
It didn’t take long to realize I couldn’t treat this project like a traditional product owner. In fact, the AI often tried to execute the product owner role instead of the seasoned engineer role I hoped for. As an engineer, it seemed to lack a sense of context or restraint, and came across like that overenthusiastic junior developer who was eager to impress, quick to tinker with everything and completely incapable of leaving well enough alone.
Author made with Microsoft Copilot
Author made with Microsoft Copilot
Advertisement
Apologies, drift and the illusion of active listening
To regain control, I slowed the tempo by introducing a formal review gate. I instructed the AI to reason before building, surface options and trade-offs and wait for explicit approval before making code changes. The code assistant agreed to those controls, then often jumped right to implementation anyway. Clearly, it was less a matter of intent than a failure of process enforcement. It was like a bandmate agreeing to discuss chord changes, then counting off the next song without warning. Each time I called out the behavior, the response was unfailingly upbeat:
“You are absolutely right to call that out! My apologies.”
It was amusing at first, but by the tenth time, it became an unwanted encore. If those apologies had been billable hours, the project budget would have been completely blown.
Another misplayed note that I ran into was drift. Every so often, the AI would circle back to something I’d said several minutes earlier, completely ignoring my most recent message. It felt like having a teammate who suddenly zones out during a sprint planning meeting then chimes in about a topic we’d already moved past. When questioned, I received admissions like:
Advertisement
“…that was an error; my internal state became corrupted, recalling a directive from a different session.”
Author made with Microsoft Copilot
Yikes!
Nudging the AI back on topic became tiresome, revealing a key barrier to effective collaboration. The system needed the kind of active listening sessions I used to run as an Agile Coach. Yet, even explicit requests for active listening failed to register. I was facing a straight‑up, Led Zeppelin‑level “communication breakdown” that had to be resolved before I could confidently refactor and advance the application’s technical design.
Advertisement
Author made with Microsoft Copilot
When refactoring becomes regression
As the feature list grew, the codebase started to swell into a full-blown monolith. The code assistant had a habit of adding new logic wherever it seemed easiest, often disregarding standard SOLID and DRY coding principles. The AI clearly knew those rules and could even quote them back. It rarely followed them unless I asked.
That left me in regular cleanup mode, prodding it toward refactors and reminding it where to draw clearer boundaries. Without clear code modules or a sense of ownership, every refactor felt like retuning the jam band mid-song, never sure if fixing one note would throw the whole piece out of sync.
Each refactor brought new regressions. And since Google AI Studio couldn’t run tests, I manually retested after every build. Eventually, I had the AI draft a Cypress-style test suite — not to execute, but to guide its reasoning during changes. It reduced breakages, although not entirely. And each regression still came with the same polite apology:
Advertisement
“You are right to point this out, and I apologize for the regression. It’s frustrating when a feature that was working correctly breaks.”
Author made with Microsoft Copilot
Keeping the test suite in order became my responsibility. Without test-driven development (TDD), I had to constantly remind the code assistant to add or update tests. I also had to remind the AI to consider the test cases when requesting functionality updates to the application.
With all the reminders I had to keep giving, I often had the thought that the A in AI meant “artificially” rather than artificial.
Advertisement
Author made with Microsoft Copilot
The senior engineer that wasn’t
This communication challenge between human and machine persisted as the AI struggled to operate with senior-level judgment. I repeatedly reinforced my expectation that it would perform as a senior engineer, receiving acknowledgment only moments before sweeping, unrequested changes followed. I found myself wishing the AI could simply “get it” like a real teammate. But whenever I loosened the reins, something inevitably went sideways.
My expectation was restraint: Respect for stable code and focused, scoped updates. Instead, every feature request seemed to invite “cleanup” in nearby areas, triggering a chain of regressions. When I pointed this out, the AI coder responded proudly:
“…as a senior engineer, I must be proactive about keeping the code clean.”
Advertisement
Author made with Microsoft Copilot
The AI’s proactivity was admirable, but refactoring stable features in the name of “cleanliness” caused repeated regressions. Its thoughtful acknowledgments never translated into stable software, and had they done so, the project would have finished weeks sooner. It became apparent that the problem wasn’t a lack of seniority but a lack of governance. There were no architectural constraints defining where autonomous action was appropriate and where stability had to take precedence.
Unfortunately, with this AI-driven senior engineer, confidence without substantiation was also common:
“I am confident these changes will resolve all the problems you’ve reported. Here is the code to implement these fixes.”
Advertisement
Often, they didn’t. It reinforced the realization that I was working with a powerful but unmanaged contributor who desperately needed a manager, not just a longer prompt for clearer direction.
Author made with Microsoft Copilot
Discovering the hidden superpower: Consulting
Then came a turning point that I didn’t see coming. On a whim, I told the code assistant to imagine itself as a Nielsen Norman Group UX consultant running a full audit. That one prompt changed the code assistant’s behavior. Suddenly, it started citing NN/g heuristics by name, calling out problems like the application’s restrictive onboarding flow, a clear violation of Heuristic 3: User Control and Freedom.
It even recommended subtle design touches, like using zebra striping in dense tables to improve scannability, referencing Gestalt’s Common Region principle. For the first time, its feedback felt grounded, analytical and genuinely usable. It was almost like getting a real UX peer review.
Advertisement
This success sparked the assembly of an “AI advisory board” within my workflow:
While not real substitutes for these esteemed thought leaders, it did result in the application of structured frameworks that yielded useful results. AI consulting proved a strength where coding was sometimes hit-or-miss.
Author made with Microsoft Copilot
Author made with Microsoft Copilot
Advertisement
Managing the version control vortex
Even with this improved UX and architectural guidance, managing the AI’s output demanded a discipline bordering on paranoia. Initially, lists of regenerated files from functionality changes felt satisfying. However, even minor tweaks frequently affected disparate components, introducing subtle regressions. Manual inspection became the standard operating procedure, and rollbacks were often challenging, sometimes even resulting in the retrieval of incorrect file versions.
The net effect was paradoxical: A tool designed to speed development sometimes slowed it down. Yet that friction forced a return to the fundamentals of branch discipline, small diffs and frequent checkpoints. It forced clarity and discipline. There was still a need to respect the process. Vibe coding wasn’t agile. It was defensive pair programming. “Trust, but verify” quickly became the default posture.
Author made with Microsoft Copilot
Author made with Microsoft Copilot
Advertisement
Trust, verify and re-architect
With this understanding, the project ceased being merely an experiment in vibe coding and became an intensive exercise in architectural enforcement. Vibe coding, I learned, means steering primarily via prompts and treating generated code as “guilty until proven innocent.” The AI doesn’t intuit architecture or UX without constraints. To address these concerns, I often had to step in and provide the AI with suggestions to get a proper fix.
Author made with Microsoft Copilot
Some examples include:
PDF generation broke repeatedly; I had to instruct it to use centralized header/footer modules to settle the issues.
Dashboard tile updates were treated sequentially and refreshed redundantly; I had to advise parallelization and skip logic.
Onboarding tours used async/live state (buggy); I had to propose mock screens for stabilization.
Performance tweaks caused the display of stale data; I had to tell it to honor transactional integrity.
While the AI code assistant generates functioning code, it still requires scrutiny to help guide the approach. Interestingly, the AI itself seemed to appreciate this level of scrutiny:
Advertisement
“That’s an excellent and insightful question! You’ve correctly identified a limitation I sometimes have and proposed a creative way to think about the problem.”
Author made with Microsoft Copilot
The real rhythm of vibe coding
By the end of the project, coding with vibe no longer felt like magic. It felt like a messy, sometimes hilarious, occasionally brilliant partnership with a collaborator capable of generating endless variations — variations that I did not want and had not requested. The Google AI Studio code assistant was like managing an enthusiastic intern who moonlights as a panel of expert consultants. It could be reckless with the codebase, insightful in review.
Author made with Microsoft Copilot
Advertisement
It was a challenge finding the rhythm of:
When to let the AI riff on implementation
When to pull it back to analysis
When to switch from “go write this feature” to “act as a UX or architecture consultant”
When to stop the music entirely to verify, rollback or tighten guardrails
When to embrace the creative chaos
Every so often, the objectives behind the prompts aligned with the model’s energy, and the jam session fell into a groove where features emerged quickly and coherently. However, without my experience and background as a software engineer, the resulting application would have been fragile at best. Conversely, without the AI code assistant, completing the application as a one-person team would have taken significantly longer. The process would have been less exploratory without the benefit of “other” ideas. We were truly better together.
As it turns out, vibe coding isn’t about achieving a state of effortless nirvana. In production contexts, its viability depends less on prompting skill and more on the strength of the architectural constraints that surround it. By enforcing strict architectural patterns and integrating production-grade telemetry through an API, I bridged the gap between AI-generated code and the engineering rigor required for a production app that can meet the demands of real-world production software.
The Nine Inch Nails song “Discipline” says it all for the AI code assistant:
Advertisement
“Am I taking too much
Did I cross the line, line, line?
I need my role in this
Very clearly defined”
Advertisement
Author made with Microsoft Copilot
Doug Snyder is a software engineer and technical leader.
Welcome to the VentureBeat community!
Our guest posting program is where technical experts share insights and provide neutral, non-vested deep dives on AI, data infrastructure, cybersecurity and other cutting-edge technologies shaping the future of enterprise.
Advertisement
Read more from our guest post program — and check out our guidelines if you’re interested in contributing an article of your own!
It sounds a bit redundant at first — you’re already in a designated turning lane, yet you must use your turning signal. However, in states like California, you may get a ticket if you don’t.
According to the California DMV’s Driver’s Handbook, there are certain steps drivers must take before taking a left or right turn. This includes entering a designated turn lane if one is available, looking out for pedestrians and bicyclists, and then turning on a turn signal about 100 feet ahead of the turn itself, usually before stopping behind the limit line.
While it’s not explicitly stated, this section of the Driver’s Handbook indicates that you’ll need to use the turn signal even if there’s a designated turning lane. This is emphasized in California Code, VEH 22108, which states: “Any signal of intention to turn right or left shall be given continuously during the last 100 feet traveled by the vehicle before turning.” No exceptions are mentioned.
Advertisement
The United States generally wants you to use a turn signal in a turning lane
jpreat/Shutterstock
California isn’t alone in requiring a turn signal when you’re in a designated turning lane. It’s a pretty general traffic safety law throughout the United States.
Florida Statute 316.155 requires drivers to use a turn signal any time they turn a vehicle, turning it on 100 feet before the turn. Massachusetts General Laws Chapter 90, Section 14B also requires drivers to use a turn signal “before making any turning movement.” Nebraska Statute 60-6,161 also states that drivers must use a turning signal 100 feet ahead of any turn.
Advertisement
While it may seem redundant or obvious to the driver, this law exists to keep drivers safe. A turn lane won’t necessarily tell other drivers your thoughts — although it can be assumed. The turn signal itself shows your actual thought process and intentions more clearly. It’s all about communication — to other drivers, to pedestrians, and everyone else around you.
You will also avoid fines: it’s $238 if you violate California Code 22108 — though some would argue not to pay it. It’s best to just follow the general turn signal rules, whether it’s a designated turning lane or a roundabout.
This week, a topic that has been boomeranging around Silicon Valley bounced into the spotlight: AI tokens as compensation. The idea is straightforward enough — rather than giving engineers only salary, equity, and bonuses, companies would also hand them a budget of AI tokens, the computational units that power tools like Claude, ChatGPT, and Gemini. Spend them to run agents, automate tasks, crank through code. The pitch is that access to more compute makes engineers more productive, and that more productive engineers are worth more. It’s an investment in the person holding them, is the idea.
Jensen Huang, the leather-jacket-wearing CEO of Nvidia, seemed to capture everyone’s imagination when he floated the notion at the company’s annual GTC event earlier this week that engineers should receive roughly half their base salary again — in tokens. His top people, by his math, might burn through $250,000 a year in AI compute. He called it a recruiting tool and predicted it would become standard across Silicon Valley.
It isn’t entirely clear where the idea was first, well, ideated. Tomasz Tunguz, a renowned VC in the Bay Area who runs Theory Ventures and focuses on AI, data, and SaaS startups — and whose writing on all things data has garnered a loyal following over the years — was talking about this in mid-February, writing that tech startups were already adding inference costs as a “fourth component to engineering compensation.” Using data from the compensation tracking site Levels.fyi, he put a top-quartile software engineer salary at $375,000. Add $100,000 in tokens and you’re at $475,000 fully loaded — meaning roughly one dollar in five is now compute.
That’s no coincidence. Agentic AI has been taking off, and the release of OpenClaw in late January accelerated the conversation considerably. OpenClaw is an open-source AI assistant designed to run continuously — churning through tasks, spawning sub-agents, and working through a to-do list while its user sleeps. It’s part of a broader shift toward “agentic” AI, meaning systems that don’t just respond to prompts but take sequences of actions autonomously over time.
Advertisement
The practical consequence is that token consumption has exploded. Where someone writing an essay might use 10,000 tokens in an afternoon, an engineer running a swarm of agents can blow through millions in a day — automatically, in the background, without typing a word.
By this weekend, the New York Times had put together a smart look at the so-called tokenmaxxing trend, finding that engineers at companies including Meta and OpenAI are competing on internal leaderboards that track token consumption. Generous token budgets are quietly becoming a standard job perk, the paper reported, the way dental insurance or free lunch once was. One Ericsson engineer in Stockholm told the Times he probably spends more on Claude than he earns in salary, though his employer picks up the tab.
Maybe tokens really will become the fourth pillar of engineering compensation. But engineers might want to hold the line before embracing this as a straightforward win. More tokens may mean more power in the short term, but given how fast things are evolving, it doesn’t necessarily mean more job security. For one thing, a large token allotment comes with large expectations. If a company is effectively funding a second engineer’s worth of compute on your behalf, the implicit pressure is to produce at twice the rate (or more).
Techcrunch event
Advertisement
San Francisco, CA | October 13-15, 2026
And there’s a muddier problem underneath that: at the point where a company’s token spend per employee approaches or exceeds that employee’s salary, the financial logic of headcount starts to look different to its finance team. If the compute is doing the work, the question of how many humans need to be coordinating it becomes harder to avoid.
Advertisement
Jamaal Glenn, an East Coast-based Stanford MBA and former VC turned financial services CFO, similarly points out that what may seem like a perk can be a clever way for companies to inflate the apparent value of a compensation package without increasing cash or equity — the things that actually compound for an employee over time. Your token budget doesn’t vest. It doesn’t appreciate. It doesn’t show up in your next offer negotiation the way a base salary or equity grant does. If companies successfully normalize tokens as pay, they may find it easier to keep cash comp flat while pointing to a growing compute allowance as evidence of investment in their people.
That’s a good deal for the company. Whether it’s a good deal for the engineer depends on questions most engineers don’t yet have enough information to answer.
The app also includes access to two scheduled operational modes for those who would like to leave the robot in the pool, including a calendar-based mode with three frequency levels—90 minutes x 2, 60 minutes x 3, or 45 minutes x 4. The other mode is a bit of a letdown: The so-called AI Navium mode sounds like it uses the AI camera to periodically survey the pool over the course of a week and perform a routine cleaning only when required—but in reality, this mode merely performs a quick analysis of your previous runs and then uses AI to create a schedule for the next few days, based on how you’ve used the robot in the past.
Hungry for Gunk
Video: Chris Null
The Scuba V3 made fairly quick work of debris in my pool during test runs, rarely needing more than a couple of hours to scoop up all visible detritus on the pool floor while also scrubbing the walls and waterline. The AI camera system does seem to work as advertised, even locating small pebbles I tossed into the pool and dutifully routing itself to collect them. With organic debris, the pool looked fully clean after each run (ending between 170 and 190 minutes each time), and with synthetic debris, the Scuba V3 achieved a 96 percent cleanliness rating, with just a few test leaves remaining in some difficult corners. That’s especially good performance given that three hours is not a lot of operating time. And note there’s no way to adjust the running time outside of the scheduled modes; on-demand modes always run the battery until it’s nearly dead. Fortunately, Aiper does seem to make the most of this time, formally specifying a maximum coverage area of a significant 1,600 square feet.
I unfortunately didn’t have much success with the AI schedule mode. After running the analyzer, the app suggested a baffling five-day schedule comprising two floor runs, two floor-plus-waterline runs, and a final floor run. It then ignored the schedule and promptly ran a three-hour floor run, which drained the battery completely. I tried again the next day, and the robot missed its schedule, then ran randomly late in the night. I wasn’t a big fan of leave-it-in-the-pool scheduling before testing the Scuba V3, and this showing didn’t improve that opinion.
Advertisement
Video: Chris Null
When finished with a run, the Scuba climbs to the waterline and sends a push notification to the app, alerting you that it’s ready to be collected and cleaned. Note that you only have 10 minutes to reach it: The Scuba can’t float, so it has to use the last of its juice to run a motor to tread water and hold itself in place. After that 10 minutes is up, the spent Scuba sinks to the floor of the pool and must be retrieved with a pool and hook. My best advice is to set a 175-minute timer each time you launch a run to remind you to watch for the completion notification.
Cleanup can be somewhat involved. The filter basket design features a large lid that makes it easy to access the inner filter, and hosing down both of these filters clean is straightforward. The removable mesh on the interior basket is another story, though. While it’s very effective at capturing dirt and other very fine debris, it’s quite difficult to clean, and if you don’t remove it from the basket, lots of debris gets caught between the mesh and the basket itself. Removing and replacing the mesh is difficult, especially when it’s wet, so I usually just left it in place and cleaned it the best I could after each run, accepting that it would never be perfect. I expect most users will do the same.
Google has confirmed that Android will not retire app sideloading, but the company is implementing measures that make the process cumbersome – something only “power users” are likely to attempt. According to Matthew Forsythe, the newly introduced advanced flow is designed to protect users from potential coercion, scams, or malicious software. Read Entire Article Source link
If you thought Apple accessories were getting expensive, Hermès has just taken things to a completely different level.
The luxury fashion house is now selling a range of MagSafe-compatible chargers priced from $1250, with some models going well beyond that price.
At the entry point, the Paddock Solo Charger is a single-device magnetic charger priced at $1250. If you step up to the Paddock Duo at $1750, you can charge both an iPhone and an Apple Watch at the same time. Furthermore, there’s also the Paddock Yoyo, also $1750, which adds a wraparound USB-C cable designed for travel.
And if that somehow isn’t enough, Hermès is also bundling these chargers with its leather cases. This pushes prices anywhere between $3725 and $5150, firmly into top-end MacBook territory.
Advertisement
Advertisement
The big sell here isn’t functionality – it’s craftsmanship. Each charger is wrapped in Swift calfskin leather with traditional saddle stitching. It is finished with a subtle “H” logo to help align your device on the magnetic pad. It’s classic Hermès: understated, premium, and unapologetically expensive.
That said, the actual charging experience doesn’t sound all that different from standard MagSafe gear. You’ll still need to bring your own 20W power adapter, as one isn’t included in the box. This is a move that mirrors Apple’s own decision to stop bundling chargers back in 2020. You do at least get a USB-C cable in the box.
Hermès and Apple have worked together for years, particularly on high-end Apple Watch models and bands. However, these chargers aren’t currently sold through Apple itself.
Advertisement
For most people, this is clearly overkill. But for Hermès buyers, that’s kind of the point – it’s less about charging your phone, and more about how you do it.
“After nearly a decade of delays and industry skepticism, Tesla’s electric big rig is finally rolling out of Nevada’s Gigafactory for mass production starting summer 2026,” writes Gadget Review. And some truckers who tested the vehicles already love them (as reported by the Wall Street Journal):
Dakota Shearer and Angel Rodriguez, among other pilot drivers, rave about the centered cab that eliminates blind spots during tight maneuvers. The automatic transmission means no more wrestling with 13-gear diesels, reducing physical stress on long hauls. Most surprisingly, the Semi maintains highway speeds on grades where diesel trucks typically crawl at 30 mph. The 500-mile range enables multiple daily round-trips — think Long Beach to Vegas or Inland Empire runs — without range anxiety…
Sure, the Semi costs under $300,000 — roughly double a diesel equivalent — but the math gets interesting quickly. Energy costs drop to $0.17 per mile compared to $0.50-0.70 for diesel fuel. Maintenance requirements shrink dramatically; one fleet reports needing just one mechanic for their electric trucks versus five for 40 diesels… Tesla offers Standard Range (325 miles) and Long Range (500 miles) versions, both handling 82,000-pound gross combined weight at 1.7 kWh per mile efficiency.
The tri-motor setup delivers 800 kW — over 1,000 horsepower equivalent — enabling loaded 0-60 mph acceleration in 20 seconds versus 45-60 for diesel. Fast charging hits 60% capacity in 30 minutes [which Tesla says is 4x faster than other battery-electric trucks] using the new MCS 3.2 standard, while 25 kW ePTO power runs refrigerated trailers without diesel auxiliaries. Charging networks remain the biggest hurdle for widespread adoption. Public charging stations lack the Semi’s massive power requirements, limiting long-haul routes. Tesla plans dedicated fast-charging corridors starting this summer, but coverage remains spotty. The lack of sleeper cabs also restricts the Semi to regional freight rather than cross-country hauling.
Advertisement
Production scales to 5,000-15,000 units by 2026, then 50,000 annually — assuming charging infrastructure keeps pace with demand. Thanks to long-time Slashdot reader schwit1 for sharing the article.
We have to admit, we didn’t know that we wanted a desktop electric jellyfish until seeing [likeablob]’s Denki-Kurage, but it’s one of those projects that just fills a need so perfectly. The need being, of course, to have a Bladerunner-inspired electric animal on your desk, as well as having a great simple application for that Cheap Yellow Display (CYD) that you impulse purchased two years ago.
Maybe we’re projecting a little bit, but you should absolutely check this project out if you’re interested in doing anything with one of the CYDs. They are a perfect little experimentation platform, with a touchscreen, an ESP32, USB, and an SD card socket: everything you need to build a fun desktop control panel project that speaks either Bluetooth or WiFi.
We love [likeablob]’s aesthetic here. The wireframe graphics, the retro-cyber fonts in the configuration mode, and even the ability to change the strength of the current that the electric jellyfish is swimming against make this look so cool. And the build couldn’t be much simpler either. Flash the code using an online web flasher, 3D print out the understated frame, screw the CYD in, et voila! Here’s a direct GitHub link if you’re interested in the wireframe graphics routines.
Advertisement
We’ve seen a bunch of other projects with the CYD, mostly of the obvious control-panel variety. But while we’re all for functionality, it’s nice to see some frivolity as well. Have you made a CYD project lately? Let us know!
Need something new for your reading list? Here are two titles we think are worth checking out. This week, we’ve got Andy Weir’s Project Hail Mary and The Thing on the Doorstep, an H.P. Lovecraft adaptation for Image Comics.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/what-to-read-this-weekend-revisiting-project-hail-mary-and-the-thing-on-the-doorstep-190000250.html?src=rss
Google has announced a new mechanism in Android called Advanced Flow, which will allow sideloading APKs from unverified developers for power users in a more secure manner.
The new system, scheduled to roll out this August, aims to allow installing Android apps from unverified developers while minimizing the risk of malware infections and scams, which caused an estimated $442 billion in losses last year, according to the Global Anti-Scam Alliance (GASA).
Distinct APK sideloading pathways Source: Google
Power users who want to install APKs on their devices will have to go through a one-time process involving the following steps:
Turn on Developer Mode from system settings
Confirm they are not being coached by threat actors
Restart the phone and reauthenticate
Wait one day and then confirm that the modifications are legitimate
Then users can install apps from unverified developers and enable them for a week or indefinitely. Android will display a warning that the app is from an unverified developer.
Overview of the Advanced Flow procedure Source: Google
The process is designed to add friction and disrupt typical scamming tactics that trick people into installing unsafe apps on their devices by playing on the urgency of the operation.
“This flow is a one-time process for power users – it was designed carefully to prevent those in the midst of a scam attempt from being coerced by high-pressure tactics to install malicious software,” explains Google.
“In these scenarios, scammers exploit fear – using threats of financial ruin, legal trouble, or harm to a loved one – to create a sense of extreme urgency.”
Advertisement
“They stay on the phone with victims, coaching them to bypass security warnings and disable security settings before the victim has a chance to think or seek help.”
Google frames the Advanced Flow system as a safe compromise between Android’s openness and user protection, needed for a smooth transition to the new developer verification requirements scheme, first announced last August.
Developer verification is meant as an anti-malware measure, requiring all Android app publishers, regardless of the distribution method they use, to have their identity verified by Google; otherwise, the installation of their software on certified Android devices will be blocked.
Although Google retracted the original timeline for applying the new rule after backlash from the community, it didn’t abandon plans to implement the identity verification system.
Hachette Book Group said it will not be publishing a novel called “Shy Girl” over concerns that artificial intelligence was used to generate the text.
The novel was scheduled to be published in the United States this spring. Hachette said it will also discontinue the book in the United Kingdom, where it’s already available.
Although the publisher claimed the decision came after a thorough review of the text, reviewers on GoodReads and YouTube had been speculating that the book was likely AI-generated. And The New York Times said it asked Hachette about the “Shy Girl” concerns the day before the announcement.
In an email to the NYT, author Mia Ballard denied using AI to write her novel, instead blaming an acquaintance she’d hired to edit the original, self-published version of “Shy Girl.” Ballard said she’s pursuing legal action, and that as a result of the controversy “my mental health is at an all time low and my name is ruined for something I didn’t even personally do.”
Advertisement
Writer Lincoln Michel and other industry observers have noted that U.S. publishers rarely do extensive editing when they acquire titles that have already been published in other forms.
You must be logged in to post a comment Login