Random numbers are very important to us in this computer age, being used for all sorts of security and cryptographic tasks. [Theory to Thing] recently built a device to generate random numbers using nothing more complicated than simple camera noise.
The heart of the build is an ESP32 microcontroller, which [Theory to Thing] first paired with a temperature sensor as a source of randomness. However, it was quickly obvious that a thermocouple in a cup of tea wasn’t going to produce nice, jittery, noisy data that would make for good random numbers. Then inspiration struck while looking at the feed from a camera with the lens cap on. Particularly at higher temperatures, speckles of noise were visible in the blackness—thermal noise, which was just what the doctor ordered.
Thus, the ESP32 was instead hooked up to an OV3660 camera, which was then covered up with a piece of black electrical tape. By looking at the least significant bits of the pixels in the image, it was possible to pick up noise when the camera should have been reporting all black pixels. [Theory to Thing] then had the ESP32 collate the noisy data and report it via a web app that offers up randomly-generated answers to yes-or-no questions.
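The extraction step is simple enough to sketch in a few lines of Python. To be clear, this is not [Theory to Thing]'s actual firmware, just a minimal illustration of the idea: take a flat array of dark-frame pixel readings, keep only bit zero of each, and pack the bits into bytes.

```python
import random

def lsb_entropy_bytes(pixels):
    """Pack the least significant bit of each pixel value into bytes.

    `pixels` is a flat list of 8-bit sensor readings from a dark frame.
    Only the bottom bit is kept, since in a covered sensor it is
    dominated by thermal noise rather than signal.
    """
    out = bytearray()
    byte = 0
    for i, p in enumerate(pixels):
        byte = (byte << 1) | (p & 1)  # keep only bit 0
        if i % 8 == 7:                # a full byte every 8 pixels
            out.append(byte)
            byte = 0
    return bytes(out)

# Stand-in for a real dark frame: near-black readings with noise
# confined to the low bits (values 0-3).
frame = [random.randint(0, 3) for _ in range(1024)]
random_bytes = lsb_entropy_bytes(frame)
print(len(random_bytes))  # 128 bytes from 1024 pixels
```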
[Theory to Thing] offers up a basic statistical exploration of bias in the system, and shows how it can be mitigated to some degree, but we’d love a deeper dive into the maths to truly quantify how good this system is when it comes to randomness. We’ve featured deep dives on the topic before. Video after the break.
3 February 1967 is a day that belongs in the annals of music history. It’s the day that Jimi Hendrix entered London’s Olympic Studios to record a song using a new component. The song was “Purple Haze,” and the component was the Octavia guitar pedal, created for Hendrix by sound engineer Roger Mayer. The pedal was a key element of a complex chain of analog elements responsible for the final sound, including the acoustics of the studio room itself. When they sent the tapes for remastering in the United States, the sounds on them were so novel that they included an accompanying note explaining that the distortion at the end was intentional, not a malfunction. A few months later, Hendrix would deliver his legendary electric guitar performance at the Monterey International Pop Festival.
“Purple Haze” firmly established that an electric guitar can be used not just as a stringed instrument with built-in pickups for convenient sound amplification, but also as a full-blown wave synthesizer whose output can be manipulated at will. Modern guitarists can reproduce Hendrix’s chain using separate plug-ins in digital audio workstation software, but the magic often disappears when everything is buffered and quantized. I wanted to find out if a more systematic approach could do a better job and provide insights into how Hendrix created his groundbreaking sound.
My fascination with Hendrix’s Olympic Studios’ performance arose because there is a “Hendrix was an alien” narrative surrounding his musical innovation—that his music appeared more or less out of nowhere. I wanted to replace that narrative with an engineering-driven account that’s inspectable and reproducible—plots, models, and a signal chain from the guitar through the pedals that you can probe stage by stage.
Each effects pedal in Hendrix’s chain contributed to enhancing the electric guitar beyond its intrinsic limits. A selection of plots from the full-circuit analysis shows how the Fuzz Face turns a sinusoid signal from a string into an almost square wave; how the Octavia pedal inverts half the input waveform to double its frequency; how the wah-wah pedal acts as a band-pass filter; and how the Uni-Vibe pedal introduces selective phase shifts to color the sound. James Provost/Rohan S. Puranik
Although I work mostly in the digital domain as an edge-computing architect in my day job, I knew that analog circuit simulations would be the key to going deeper.
My first step was to look at the challenges Hendrix was trying to address. Before the 1930s, guitars were too quiet for large ensembles. Electromagnetic pickups—coils of wire wrapped around magnets that detect the vibrations of metal strings—fixed the loudness problem. But they left a new one: the envelope, which specifies how the amplitude of a note varies as it’s played on an instrument, starting with a rising initial attack, followed by a falling decay, and then any sustain of the note after that. Electric guitars attack hard, decay fast, and don’t sustain like bowed strings or organs. Early manufacturers tried to modify the electric guitar’s characteristics by using hollow bodies fitted with magnetic pickups, but the instrument still barked more than it sang.
Hendrix’s mission was to reshape both the electric guitar’s envelope and its tone until it could feel like a human voice. He tackled the guitar’s constraints by augmenting it. His solution was essentially a modular analog signal chain driven not by knobs but by hands, feet, gain staging, and physical movement in a feedback field.
Hendrix’s setups are well documented: Set lists, studio logs, and interviews with Mayer and Eddie Kramer, then the lead engineer at Olympic Studios, fill in the details. The signal chain for “Purple Haze” consisted of a set of pedals—a Fuzz Face, the Octavia, and a wah-wah—plus a Marshall 100-watt amplifier stack, with the guitar and room acoustics closing a feedback loop that Hendrix tuned with his own body. Later, Hendrix would also incorporate a Uni-Vibe pedal for many of his tracks. All the pedals were commercial models except for the Octavia, which Mayer built to produce a distorted signal an octave higher than its input.
I obtained the schematics for each of these elements and their accepted parameter ranges, and converted them into netlists that ngspice, an open-source implementation of the Spice circuit simulator, can process. The Fuzz Face pedal came in two variants, using germanium or silicon transistors, so I created models for both. In my models, Hendrix’s guitar pickups had a resistance of 6 kilohms and an inductance of 2.5 henrys with a realistic cable capacitance.
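The pickup model alone already says something about the guitar's raw voice. The sketch below uses the pickup values given above (6 kilohms, 2.5 henrys) with an assumed cable capacitance of 470 picofarads, a value I've picked as typical for a few meters of guitar cable rather than one from the article, to locate the resonant peak of the resulting second-order low-pass:

```python
import math

R = 6_000    # pickup DC resistance, ohms
L = 2.5      # pickup inductance, henrys
C = 470e-12  # cable capacitance, farads (assumed value)

def gain(f):
    """Magnitude of V_out/V_in for the pickup's series R-L feeding the
    shunt cable capacitance C: H(jw) = 1 / (1 - w^2*L*C + j*w*R*C)."""
    w = 2 * math.pi * f
    re = 1 - (w * w) * L * C
    im = w * R * C
    return 1 / math.hypot(re, im)

# Scan for the resonant peak that shapes the guitar's raw tone.
freqs = range(500, 10_000, 10)
f_peak = max(freqs, key=gain)
print(f_peak, round(gain(f_peak), 1))
```

With these numbers the peak lands in the mid-4-kilohertz region, the familiar "presence" bump of a passive pickup into a cable; changing the assumed capacitance moves it, which is one reason cable choice audibly matters.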
I chained the circuit simulations together using a script, and I produced data-plot and sample sound outputs with Python scripts. All of the ngspice files and other scripts are available in my GitHub repository at github.com/nahorov/Hendrix-Systems-Lab, with instructions on how to reproduce my simulations.
What Does the Analysis of Hendrix’s Signal Chain Tell Us?
Plotting the signal at different points in the chain with different parameters reveals how Hendrix configured and manipulated the nonlinear complexities of the system as a whole to reach his expressive goals.
A few highlights: First, the Fuzz Face is a two-transistor feedback amplifier that turns a gentle sinusoid signal into an almost binary “fuzzy” output. The interesting behavior emerges when the guitar’s volume is reduced. Because the pedal’s input impedance is very low (about 20 kΩ), the pickups interact directly with the pedal circuit. Reducing amplitude restores a sinusoidal shape—producing the famous “cleanup effect” that was a hallmark of Hendrix’s sound, where the fuzz drops in and out as desired while he played.
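You can get a feel for both behaviors with a toy nonlinearity. The hyperbolic tangent below is a stand-in for the Fuzz Face's transistor saturation, not its actual transfer curve; the RMS-to-peak ratio quantifies how "square" the output is, and rolling back the input level restores a sine-like shape, mimicking the cleanup effect:

```python
import math

def fuzz(sample, drive):
    """Soft-clipping stand-in for the Fuzz Face's transistor saturation."""
    return math.tanh(drive * sample)

def crest(signal):
    """RMS-to-peak ratio: ~0.707 for a sine, approaching 1.0 for a square."""
    peak = max(abs(s) for s in signal)
    rms = math.sqrt(sum(s * s for s in signal) / len(signal))
    return rms / peak

N = 10_000
sine = [math.sin(2 * math.pi * i / N) for i in range(N)]

hot = [fuzz(s, 10.0) for s in sine]         # volume up: near-square "fuzz"
clean = [fuzz(0.1 * s, 1.0) for s in sine]  # volume rolled off: "cleanup"

print(round(crest(sine), 3), round(crest(hot), 3), round(crest(clean), 3))
```

Driven hard, the output spends most of its time pinned near the rails, so its crest ratio climbs toward 1; with the input attenuated, the nonlinearity is barely exercised and the waveform stays essentially sinusoidal.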
The Jimi Hendrix Experience, (left to right) Mitch Mitchell, Jimi Hendrix, Noel Redding. Fred W. McDarrah/Getty Images
Second, the Octavia pedal used a rectifier, which normally converts alternating to direct current. Mayer realized that a rectifier effectively flips each trough of a waveform into a peak, doubling the number of peaks per second. The result is an apparent doubling of frequency—a bloom of second-harmonic content that the ear hears as a bright octave above the fundamental.
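The frequency doubling is easy to verify numerically. The sketch below full-wave rectifies a sine and measures single DFT bins (this idealizes the pedal's rectifier, which in hardware is lossier and messier): the fundamental vanishes and a strong component appears one octave up.

```python
import cmath, math

FS, N = 1000, 1000  # 1 second of samples at 1 kHz
F0 = 5              # input fundamental, Hz

def amplitude(signal, f):
    """Single DFT bin: amplitude of the component at frequency f."""
    acc = sum(s * cmath.exp(-2j * math.pi * f * n / FS)
              for n, s in enumerate(signal))
    return 2 * abs(acc) / len(signal)

sine = [math.sin(2 * math.pi * F0 * n / FS) for n in range(N)]
rectified = [abs(s) for s in sine]  # ideal full-wave rectification

print(round(amplitude(sine, F0), 3),           # ~1.0: fundamental present
      round(amplitude(rectified, F0), 3),      # ~0.0: fundamental gone
      round(amplitude(rectified, 2 * F0), 3))  # ~0.42: octave-up component
```

The octave component's amplitude of about 0.42 matches the Fourier series of |sin|, whose first AC term is 4/(3π) at twice the input frequency.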
Third, the wah-wah pedal is a band-pass filter: Frequency plots show the center frequency sweeping from roughly 300 hertz to 2 kilohertz. Hendrix used it to make the guitar “talk” with vowel sounds, most iconically on “Voodoo Child (Slight Return).”
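A digital stand-in for one position of the wah's sweep is a simple biquad band-pass; the coefficients below follow the well-known audio-EQ cookbook form, with the 500 Hz center and Q of 5 chosen for illustration rather than taken from the pedal's circuit:

```python
import math

def bandpass_coeffs(f0, q, fs):
    """Constant-peak-gain biquad band-pass (audio-EQ cookbook form)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = (alpha / a0, 0.0, -alpha / a0)
    a = (-2 * math.cos(w0) / a0, (1 - alpha) / a0)
    return b, a

def biquad(signal, b, a):
    """Direct-form-I filter loop."""
    y1 = y2 = x1 = x2 = 0.0
    out = []
    for x in signal:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        out.append(y)
        x1, x2, y1, y2 = x, x1, y, y1
    return out

def rms(sig):
    return math.sqrt(sum(s * s for s in sig) / len(sig))

FS = 44_100
b, a = bandpass_coeffs(f0=500, q=5, fs=FS)  # one position of the sweep

def tone(f, n=FS):
    return [math.sin(2 * math.pi * f * i / FS) for i in range(n)]

# Measure steady-state output level at the center and far above it.
in_band = rms(biquad(tone(500), b, a)[FS // 2:])
out_band = rms(biquad(tone(5000), b, a)[FS // 2:])
print(round(in_band, 3), round(out_band, 3))
```

A tone at the center passes at essentially unity gain while one a decade above is cut by more than 20 dB; sweeping `f0` with a foot pedal is what produces the vowel-like "wah".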
Fourth, the Uni-Vibe cascades four phase-shift sections controlled by photoresistors. In circuit terms, it’s a low-frequency oscillator modulating a variable-phase network; in musical terms it’s motion and air.
Finally, the whole chain became a closed loop by driving the Marshall amplifier near saturation, which among other things extends the sustain. In a reflective room, the guitar strings couple acoustically to the speakers—move a few centimeters and you shift from one stable feedback mode to another. To an engineer, this is a gain-controlled acoustic feedback system. To Hendrix, it was part of the instrument. He learned to tune oscillation with distance and angle, shaping sirens, bombs, and harmonics by walking the edge of instability.
Hendrix didn’t speak in decibels and ohm values, but he collaborated with engineers who did—Mayer and Kramer—and iterated fast as a systems engineer. Reframing Hendrix as an engineer doesn’t diminish the art. It explains how one person, in under four years as a bandleader, could pull the electric guitar toward its full potential by systematically augmenting the instrument’s shortcomings for maximum expression.
This article appears in the March 2026 print issue as “Jimi Hendrix, Systems Engineer.”
The power of a true random number generator comes from the genuine unpredictability of the physical world; software equivalents are ultimately constrained by the patterns of their algorithms once the seed is known. Hardware approaches are more interesting, exploiting chaotic phenomena such as quantum effects or thermal motion. One project took the simplest route of all, extracting randomness directly from a camera sensor’s noise.
YouTuber “Maker Theory to Thing” started by attaching an OV3660 camera module to an ESP32 microcontroller. Then, to block all light, they placed a piece of black electrical tape directly over the lens. Even in near-total darkness, the sensor still registers minute changes. As the device operates and heats up, thermal energy jostles the electrons inside each pixel, producing speckles of noise across what should be a uniformly black picture.
After grabbing the raw pixel data from a few frames, they focused on the least significant bit of each pixel. That single bit flips between zero and one in response to even the slightest variation; a single electron shifting from one place to another can tip it. Gathering all of those bits together yields streams of lovely, unpredictable data.
When they first tested it, the results weren’t especially promising: there was a slight imbalance, with 51 percent ones and 49 percent zeros. As it turns out, some of the camera’s pixels stay stuck on the same value from frame to frame, thanks to dust, a scratch, or simply a manufacturing quirk somewhere along the line. Those consistent pixels added predictability to the mix. They therefore devised a straightforward method of comparing frames and discarding the regions that never changed. After that step, the distribution of ones and zeros became much closer to even.
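Both mitigation ideas are easy to sketch. The first function below mirrors the builder's frame-comparison trick, dropping pixel positions that never change; the second is the classic Von Neumann debiaser, a standard complementary technique for removing fixed bias, though there's no indication the builder used it:

```python
def drop_stuck_pixels(frames):
    """Keep only pixel positions whose value changes across frames,
    discarding 'stuck' pixels that would add predictability."""
    live = [i for i in range(len(frames[0]))
            if any(f[i] != frames[0][i] for f in frames[1:])]
    return [[f[i] for i in live] for f in frames]

def von_neumann(bits):
    """Classic debiaser: emit 0 for a (0,1) pair, 1 for a (1,0) pair,
    drop equal pairs. Removes fixed bias at the cost of throughput."""
    return [b1 for b1, b2 in zip(bits[::2], bits[1::2]) if b1 != b2]

# Two toy 8-pixel frames: positions 0 and 3 never change ("stuck").
frames = [[7, 2, 5, 9, 1, 4, 6, 3],
          [7, 3, 4, 9, 2, 5, 7, 2]]
print(drop_stuck_pixels(frames))  # stuck positions dropped from each frame

print(von_neumann([0, 1, 1, 0, 1, 1, 0, 0, 1, 0]))  # [0, 1, 1]
```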
The ESP32 collects the results and streams them over the air. Users can visit a simple web interface and ask yes-or-no questions; press a button, and you get an answer drawn from the noise-derived randomness. Because everything stays local and no internet connection is required, responses are fast, and the camera's ability to sample thousands of pixels at once keeps the bit rate high.
Similar concepts have been applied before, such as Cloudflare's wall of lava lamps, where the real entropy likewise comes from sensor noise in photos of the moving blobs. Other builders have used diode avalanche effects or optical mouse sensors, but this version is notable for its minimal parts count: a cheap ESP32-CAM board gets the job done, with no additional components needed to produce the core entropy.
Under controlled conditions, thermal noise in image sensors provides fairly consistent unpredictability. Be aware that light leaks or abrupt temperature changes can introduce bias, although with the lens covered this shouldn't be a problem. Statistical checks confirm that the output behaves randomly, which is fine for casual or experimental purposes. [Source]
The NASA astronaut whose serious medical issue prompted the early return of SpaceX’s Crew-11 from the International Space Station (ISS) has spoken out about the incident.
In a statement posted on NASA’s website on Wednesday, NASA astronaut Mike Fincke stepped forward to say that while in orbit aboard the ISS, he experienced a medical event “that required immediate attention from my incredible crewmates,” adding that “thanks to their quick response and the guidance of our NASA flight surgeons, my status quickly stabilized.”
Fincke, 58, continued: “After further evaluation, NASA determined the safest course was an early return for Crew-11 — not an emergency, but a carefully coordinated plan to be able to take advantage of advanced medical imaging not available on the space station.”
Crew-11 arrived at the ISS in August 2025 and wasn't due to return for at least another month. But it became apparent that all was not well aboard the orbital outpost on January 7, when NASA called off a spacewalk involving Fincke and colleague Zena Cardman that was scheduled for the following day.
In his statement, Fincke declined to give any details about the nature of his medical condition, but at the time the situation was deemed serious enough for newly installed NASA administrator Jared Isaacman to bring the four-person crew home early aboard their SpaceX Dragon spacecraft, which splashed down off the coast of San Diego on January 15.
“I am deeply grateful to my fellow Expedition 74 members — Zena Cardman, Kimiya Yui, Oleg Platonov, Chris Williams, Sergey Kud-Sverchkov, and Sergei Mikayev — as well as the entire NASA team, SpaceX, and the medical professionals at Scripps Memorial Hospital La Jolla near San Diego,” Fincke said in his statement. “Their professionalism and dedication ensured a positive outcome.”
He added that he’s “doing very well and continuing standard post-flight reconditioning at NASA’s Johnson Space Center in Houston,” describing spaceflight as “an incredible privilege,” and how it sometimes “reminds us just how human we are.”
Crewed space missions have, very occasionally, been shortened due to technical issues, but the rescheduling of Crew-11 marks the first time in NASA’s history that an astronaut mission has been cut short over health concerns.
While the ISS has various medical facilities to cope with health emergencies, in this case it was clear that the safest way forward was to bring the crew home.
A new suit filed by the New York attorney general seeks to stop Valve Software’s use of “loot box” mechanics in its popular PC games, accusing the Bellevue, Wash.-based company of making billions of dollars by luring children and teenagers into gambling on rare Counter-Strike skins.
The lawsuit alleges that Valve’s first-party games are essentially an illegal gambling operation aimed at younger players. It seeks to stop Valve from implementing loot box mechanics in its games going forward, as well as hit the company with a fine equal to “three times the amount of its gain from the illegal practices alleged herein.”
While it’s arguably best known for running the digital gaming marketplace Steam, Valve Software also owns and operates several of its own popular PC games, including Counter-Strike 2, Dota 2, and Team Fortress 2.
All three games feature an optional mechanic where players can pay real money in exchange for “loot boxes”: a virtual item that drops a randomly-generated piece of cosmetic gear that can be used in-game. Most of these items have no mechanical impact and are there strictly for looks, such as silly hats in TF2 or neon-painted “weapon skins” in CS2. This might not make sense to you, but there are people in this world who will pay any price to get an appropriately ugly virtual rifle, and these people are part of why Valve CEO Gabe Newell has a superyacht.
Despite their lack of actual effect, loot boxes and item trading are both an extraordinarily lucrative market for Valve. Virtual items for these three games have been sold for staggering amounts of real money. One estimate cited by the AG’s office indicates that the market for Counter-Strike skins alone was worth over $4.3 billion as of last year.
While Valve absolutely benefits from the strangely frenetic market for virtual items in CS2, TF2, and Dota 2 — it sells these loot boxes in the first place, and hosts the secondary market for them via the in-app Steam Marketplace — the occasionally shocking prices for these items are part of a player-created economy.
In an official press release, New York Attorney General Letitia James wrote that “illegal gambling can be harmful and lead to serious addiction problems… Valve has made billions of dollars by letting children and adults alike illegally gamble for the chance to win valuable virtual prizes. These features are addictive, harmful, and illegal, and my office is suing to stop Valve’s illegal conduct and protect New Yorkers.”
GeekWire reached out to Valve for further comment.
The suit against Valve marks the latest in a series of moves by James to crack down on gambling operations within New York state, such as shutting down 26 online casinos last year, as well as taking aim at Meta and TikTok for allegedly posing “harms to young people’s mental health.”
Panasonic has unveiled two very different additions to its audio lineup: the audiophile-focused Technics SL-1500CS direct drive turntable and the party-ready SC-BMAX30 transportable speaker.
The Technics SL-1500CS marks the return of the well-regarded SL-1500C, updated with the company’s latest audio innovations such as ΔΣ (Delta Sigma) Drive technology, borrowed from more expensive models in its lineup.
Image credit: Trusted Reviews
The addition of the ΔΣ-Drive, together with reduced motor vibration and improved rotational stability of the platter, is why Technics claims the SL-1500CS delivers “best in class” sonic performance.
A built-in phono equaliser is included, making it easier to connect the deck directly to amplifiers or speakers without the need for a dedicated phono preamp. Given that the SL-1500C came out seven years ago, the design has also received a much-needed refresh, with a metallic grey colourway that retains Technics’ minimalist aesthetic.
The SL-1500CS will retail for £1,099 when it goes on sale from March 2026 onwards.
Image credit: Trusted Reviews
From the Panasonic side of the audio department comes a new flagship party speaker, the SC-BMAX30. It delivers 320W of power, and its main intent is to boom out a deep bass performance. We heard it at Panasonic’s Experience Event in Germany, and it delivers plenty of low-frequency welly.
Despite its size, portability is clearly a priority: the SC-BMAX30 has wheels, a telescopic handle, and a battery that reportedly lasts up to 14 hours.
A metal grille conceals a dynamic lighting system designed to sync with your music, and the design is also rated to IPX4 to protect this party speaker from splashes and light weather.
Connectivity options include Bluetooth Multipoint for pairing two devices simultaneously and Wireless Chain Connection for linking multiple units together. There are also dedicated mic and guitar inputs for karaoke or live sessions, though as one attendee pointed out, it could do with some RCA connections.
The SC-BMAX30 goes on sale March 2026, and it is priced at £399.99 / €449.
The recently unveiled x86CSS project aims to emulate an x86 processor within a web browser. Unlike many other web-based emulators, Lyra Rebane’s implementation is written entirely in CSS. More precisely, the x86CSS page hosts a C program compiled with GCC into native 8086 machine code, which is then executed through…
There was a time when only the richest ham radio operators could have a radio with a panadapter. Back in the day, this was basically a spectrum analyzer that monitored a broad slice of the receiver’s intermediate frequency so you could see signals on either side of the receiver’s actual frequency. Today, with SDR technology and computers, this is an easy thing for receivers to implement. But what if you want to refit a classic radio? It isn’t that hard, and [Mirko Pavleski] shares his notes on how he tackled the project. You can also check it out in the video below.
The plan is simple. A FET amplifier taps the radio’s IF stage before the first IF filter. This provides good isolation and buffering. Then, an emitter follower stage provides a matched output to the SDR through a low-pass filter. The SDR remains tuned to the IF frequency, of course. The rest is essentially software and procedures.
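The software side amounts to plotting the amplitude of frequency bins across a slice of spectrum centered on the IF. Here's a toy illustration with made-up numbers (a 2 kHz "IF" sampled at 10 kHz, nothing like a real receiver's 455 kHz or 9 MHz IF), showing two "stations" either side of center:

```python
import cmath, math

FS = 10_000  # sample rate of the SDR tap (toy value)
IF = 2_000   # receiver's intermediate frequency, Hz (toy value)

def bin_amp(signal, f):
    """Amplitude of one DFT bin at frequency f."""
    acc = sum(s * cmath.exp(-2j * math.pi * f * n / FS)
              for n, s in enumerate(signal))
    return 2 * abs(acc) / len(signal)

# Simulated IF tap: two stations offset either side of the IF center.
tap = [0.8 * math.sin(2 * math.pi * (IF - 200) * i / FS) +
       0.5 * math.sin(2 * math.pi * (IF + 300) * i / FS)
       for i in range(FS)]  # one second of samples

# A panadapter just plots bin_amp across the slice of IF spectrum.
for f in (IF - 200, IF, IF + 300):
    print(f, round(bin_amp(tap, f), 3))
```

The two offset signals show up at their correct positions either side of the quiet center frequency, which is exactly the "signals on either side of the receiver's actual frequency" view the panadapter provides.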
Of course, your exact connection to your radio will differ unless you have the same receiver shown in the video. A modern scope with an FFT should be able to help you quickly locate a good spot, though.
Of course, you could just listen through the SDR. That doesn't seem quite sporting, but it does appear to be what he does in the demonstrations. Essentially, he's using the radio's RF front end up through the first IF mixer, then letting the SDR handle the rest. But you could instead use the SDR purely as a display and tune the radio.
If you really wanted a cool system, you could count the radio's internal frequencies and display the correct tuned frequency in software, tracking it as it changes. This would make it feel more like a traditional panadapter and less like simply replacing most of the radio's features with an SDR.
It’s always a fun day for the space nerds when a NASA team has new images to share from the James Webb Space Telescope. Today’s pair has brains on the brain, with a look at the fittingly named Exposed Cranium Nebula. More officially, this cloud of space dust and debris is known as Nebula PMR 1. The images shared today may capture a moment in the final stages of a star, as well as giving hints as to how the nebula got its brain-like shape.
“The nebula appears to have distinct regions that capture different phases of its evolution — an outer shell of gas that was blown off first and consists mostly of hydrogen, and an inner cloud with more structure that contains a mix of different gases,” NASA’s blog post reads. The dark line that runs vertically through the nebula, giving it the cranial appearance, could be the result of “an outburst or outflow from the central star, which typically occurs as twin jets burst out in opposite directions.” Both Webb’s Near-Infrared Camera (NIRCam) and its Mid-Infrared Instrument (MIRI) were used to document the nebula.
The US added 57 gigawatt-hours (GWh) of battery storage capacity to its electric grid last year – enough to supply the annual electricity needs of roughly five million homes. The SEIA report projects an additional 21 percent increase by the end of 2026, representing about 70 GWh of new capacity in a single year.
Over 30 years after the premiere of Neon Genesis Evangelion, Studio Khara announced that a new Evangelion anime is in the works. The franchise follows depressed teenager Shinji Ikari and his allies as they pilot giant robot EVAs to protect the world from the destructive Angels, only to discover another conspiracy at play that threatens all of humanity.
As an Evangelion fan, the prospect of seeing another series in the franchise is very exciting. However, the question lingers: will this new Evangelion anime live up to its predecessor with the talent behind it, and is such a series even necessary?
Evangelion already ended on a high note
Studio Khara
Through Evangelion’s tumultuous history, audiences have witnessed three different endings to its story in the original anime series, The End of Evangelion, and Evangelion 3.0+1.0 Thrice Upon A Time. The latter film ended with Shinji overcoming his depression and recreating the world into one without EVAs, allowing him and his friends to grow up and have normal lives, giving them all the happy ending they deserve.
This film delivered a satisfying, uplifting conclusion to the long-running anime. Shinji’s decision to break the vicious cycle of trauma and remake the world aligned with the story’s themes of courage and moving forward in life, even if that means feeling pain or making mistakes, a resolution meant to bring him and the audience closure after all these years. It was honestly my favorite of the franchise’s three endings, and it didn’t feel like Evangelion needed to go any further.
It’s unknown whether the next Evangelion series will be a prequel, sequel, or remake of the original anime. We may or may not see Shinji and his friends again in this new anime. Fortunately, Evangelion has left open some room for the story to progress after the most recent film.
Evangelion 3.0+1.0 revealed that Shinji and the rest of the cast were trapped in a time loop, forcing them to relive the franchise’s story as seen in the original show and films. This concept makes it easier for us to imagine the upcoming anime starting fresh while continuing where Evangelion’s story left off.
The new anime’s creators suggest a fresh, faithful follow-up
Studio Khara
Though Evangelion creator Hideaki Anno won’t helm this new anime, it still features some returning talent working behind the scenes. Rebuild of Evangelion film director Kazuya Tsurumaki will lead the project alongside Evangelion 3.0+1.0 assistant director Toko Yatabe. Tsurumaki has worked closely with Anno since the days of Neon Genesis Evangelion, so someone with this much experience with the franchise should ensure the new anime stays true to the source material.
What is truly remarkable is that the upcoming Evangelion series will be written by Yoko Taro, who created the hit sci-fi video game series NieR. Similar to Evangelion, the NieR franchise has told subversive stories about characters who grapple with loneliness and seek purpose in life, all while depicting robots fighting in a post-apocalyptic world. Yoko has gone as far as to call NieR: Automata‘s story a retelling of Evangelion. Given Yoko’s success as a subversive storyteller and reverence for Evangelion, his writing a new chapter in the latter’s story would be interesting, to say the least.
The new anime will be produced by Hideaki Anno’s animation company, Studio Khara, which made the four Rebuild of Evangelion films, along with CloverWorks, the animation studio behind popular shows like Darling in the Franxx, The Promised Neverland, Rascal Does Not Dream of Bunny Girl Senpai, and My Dress-Up Darling.
We’ve seen both Khara and CloverWorks deliver some top-tier animation with their respective projects. The two of them working together to revive Evangelion would lead to a dynamic, layered, and vibrant installment to an already stunning anime.
Will Evangelion be the same without Anno?
Studio Khara
The Evangelion anime has long been the brainchild of Hideaki Anno. The acclaimed writer poured his experiences with clinical depression into the characters’ psychological journeys, particularly Shinji’s, presenting a harrowing, thought-provoking anime unlike any other.
Anno seemed to have finally made peace with his beloved anime with his final Rebuild of Evangelion film. Just as Shinji broke free from the cycle of sadness and violence that dominated his life, Anno finally concluded his magnum opus after writing multiple endings, allowing him to move on from the franchise after working on it for so long.
It’s hard to imagine this franchise without Anno at the helm. It’s also unclear what the new Evangelion creators will add to such a personal story that will allow it to stand out while honoring Anno’s work. However, Anno said in a 2016 interview that he hopes other creators will work on Evangelion in his stead.
“…I want them to be appealing works; it won’t be without specific conditions, but I will not confine them to what my works have established. Just like Gundam, which keeps continuously supporting the animation world, Eva can become a new pillar. After all, it is the purpose that led me to resume through the New Theatrical Versions. I want to maintain this pillar, which carries the animation world…,” said Anno. “I do this for the well-being of the animation industry. Gundam can be enjoyed through various works, and it would be nice if Eva can develop in the same way. I think it’s better if there is a diversity in the works.”
While Evangelion has long been a reflection of the man who created it, it is clear that the franchise has grown far beyond Anno, and he is happy to let the saga move forward with someone else.
Ultimately, Evangelion’s story is destined to repeat itself, but it seems to be for the better. An installment to such a prestigious franchise will undoubtedly be a huge economic boon for the animation industry. It will also keep Anno’s legacy alive and allow a new generation of viewers and creators to take in this story and conjure their own interpretations. Whatever this anime has in store for us, it will be a bold, new beginning for Evangelion worth watching.