Anyone looking for a new opportunity in the engineering space should consider one of the following courses as part of their upskilling process.
While it is an innovative, exciting and dynamic sector, the STEM space and the careers that exist within it demand consistent and up-to-date training in order to ensure professionals are operating at their peak. Upskilling is an essential element of engineering careers and with that in mind, the following courses could be an ideal way for you to stay in the know and ahead of the game.
Coursera
For those looking for an introductory course to be undertaken flexibly, IBM via Coursera is running a free Introduction to Software Engineering programme. It consists of six modules, is aimed at beginners and can be managed at the learners’ own pace. It also comes with a certificate upon completion of the course.
Also available via Coursera, with free online enrollment, is a 12-week introduction to Systems Engineering Specialisation, offered by the University of Colorado, Boulder. The beginner-level programme can be accessed in eight different languages and aims to teach the fundamentals, methods, practices and processes of industry-standard systems engineering.
Coursera also offers a range of courses designed for more advanced learners, such as the Microsoft AI & ML Engineering Professional Certificate. The free six-month programme aims to prepare students and professionals for a career in artificial intelligence and machine learning. Coursera itself also has a free four-week Deep Learning Engineering Specialisation course for advanced students.
Further Education and PLC
For students based in Louth looking to earn a level five qualification, there is a level five QQI Further Education and PLC course in Engineering Technology. The programme is said to equip learners with the skills and knowledge needed to gain employment, access apprenticeship programmes and progress to further study in universities and institutes of technology. Successful graduates will have the opportunity to find employment in areas such as engineering machine operations, civil engineering, electrical engineering and advanced manufacturing, alongside others.
In Dunboyne, Meath, there is a year-long level five Engineering Technology course open to prospective students looking towards further education. It is a pre-university engineering course designed to prepare the student for work in the field of engineering via entry into a third-level institution. Students who graduate from this course can pursue further study at degree level and will be well placed to gain apprenticeships in the many different engineering sectors through Generation Apprenticeship Ireland. There is a range of level five opportunities available nationwide, so be sure to find one that is convenient.
South East Technological University
South East Technological University (SETU) has dozens of opportunities for engineering students and professionals, to fit a range of lifestyles and ambitions. Courses for those looking for their bachelor’s degree include standard four-year programmes in areas such as agricultural systems engineering, electronic engineering, aerospace engineering and automation engineering, among others.
For more established students, there are also several one-year master’s degrees, such as the Master of Science in Engineering Research and Innovation and Master of Science in Sustainable Energy Engineering. Different courses will have unique requirements, commitments and prices, so make sure to read up on your chosen course first.
Udemy
Educational platform Udemy has a free Engineering Mechanics Fundamentals to Proficiency course, which can be undertaken online at the learner's convenience. The programme is aimed at beginners, covers a range of topics such as foundational mechanics and principles, and requires an understanding of basic maths and physics. Udemy has dozens of free and paid options to suit a variety of budgets and lifestyles, including The Complete Full Stack AI Engineering Bootcamp, Site Reliability Engineering and The Complete Mechanical Engineering Course, which claims to offer 12 courses in one.
Whether you are a complete novice, an enthusiast, a graduate or an established professional, there is really no incorrect way to engage with learning, provided you have a clear idea of what it is you hope to achieve from the experience. So make sure to do your research, identify your weaknesses and shop around for the course or learning materials that match your ambitions and available resources.
An anonymous reader quotes a report from Ars Technica: Prospective Vizio TV buyers should know there’s a good chance the set won’t work properly without a Walmart account. In an attempt to better serve advertisers, Walmart, which bought Vizio in December 2024, announced this week that select newly purchased Vizio TVs now require a Walmart account for setup and accessing smart TV features. Since 2024, Vizio TVs have required a Vizio account, which a Vizio OS website says is necessary for accessing “exclusive offers, subscription management, and tailored support.” Accounts are also central to Vizio’s business, which is largely driven by ads and tracking tied to its OS.
A Walmart spokesperson confirmed to Ars Technica that Walmart accounts will be mandatory on “select new Vizio OS TVs” for owners to complete onboarding and to use smart TV features. The representative added: “Customers who already have an existing Vizio account are being given the option to merge their Vizio account with their Walmart account. Customers with an existing Vizio account can opt out by deleting their Vizio account.” The representative wouldn’t confirm which TV models are affected. Walmart’s representative said the Walmart account integration is “designed to respect consumer choice and privacy, with data used in aggregated, permissioned, and compliant ways” but didn’t specify how.
The Washington Technology Industry Association held its Tech in Focus roundtable on March 25, 2026, in Seattle. Credit: Ken Yeung
Washington state may have everything it needs to become a global AI hub. The problem is, it hasn’t figured out how to say so, and its political and tech leaders agree it’s time they got to work on it.
On Wednesday, the Washington Technology Industry Association (WTIA) convened a roundtable of civic and industry leaders from throughout the Seattle region to ask a pointed question: What will it actually take for Washington state to stop playing catch-up with Silicon Valley and start leading?
In a white paper commissioned by the WTIA, author and futurist Alex Lightman argues the Emerald City holds six distinct advantages over rival tech hubs: an abundance of clean energy, a backyard full of hyperscalers like Microsoft and Amazon, an acceptance of using AI to continuously improve AI and software, access to quantum computing, the ability to run large-scale simulations cheaply, and a growing foothold in space technology.
These assets, he contends, are what position Seattle to become a top-five U.S. city economically, comparable to a G7 economy with a $1 trillion GDP.
Yet while WTIA’s white paper largely shows that the city has incredible potential, the lobbying group emphasizes that it is a roadmap. The real challenge is to figure out what happens next. Once the talking is done, who’s going to organize the effort to transform the state?
Nick Ellingson reviews WTIA’s “Seattle’s AI Advantage” whitepaper at a roundtable event at the offices of K&L Gates on March 25, 2026. Credit: Ken Yeung
“I think one of the most important things we can do is start telling this story,” said Randa Minkarah, WTIA chief operations executive, referring to Washington’s need to establish itself as a leading, responsible AI and advanced technology region. “How do we get that out there that changes people’s point of view?”
Once that narrative takes hold, it can create momentum—”a storytelling flywheel” that spreads best practices and lessons across communities and organizations, Minkarah added.
Washington’s struggle to tell a coherent AI story isn’t caused by a single issue, but rather by a host of issues. Rachel Smith, president of the Washington Roundtable, pointed to a three-way misalignment between federal priorities and dollars, state priorities and dollars, and what is actually happening on the ground in communities.
“When those things are all misaligned, it feels like we spend a whole lot of money and we don’t get a whole lot out of it,” she said.
Smith called for a broader strategy focused on economic competitiveness and tax reform. This is a topic of debate after state lawmakers approved a new income tax on high earners this month. One investor in the audience underscored the issue, noting that some of the people writing checks in Washington’s tech ecosystem have moved their residences out of state.
Beau Perschbacher, senior policy advisor for Governor Bob Ferguson, participated in the WTIA roundtable discussion on how to make Washington a global AI state. Credit: Ken Yeung
There’s also the failure to make AI’s benefits accessible to everyday Washingtonians, as indigenous communities and local residents feel excluded. And compounding the issue is the lack of strategic alignment, as Washington has pared back its economic development strategy. That’s not what community leaders want—they want Olympia to take the lead.
“That is a place where the state having a direction on the AI industry, where we want to go, would be super helpful,” Jesse Canedo remarked. Beau Perschbacher, Governor Bob Ferguson’s senior policy advisor for economic development, didn’t disagree.
So what actually needs to happen?
Panelists didn’t hold back when asked what Washington’s leaders must do in the next 24 months: Joe Nguyen, a former Washington State senator and CEO of the Seattle Metropolitan Chamber of Commerce, wants more risk-takers—businesses willing to be first movers in adopting AI within their industries and then evangelize what’s possible.
Jesse Canedo, chief economic development officer for the City of Bellevue, hopes operators can execute on the white paper’s vision.
“Seattle as a region does a lot of great visioning,” he said. “It needs a lot of operationalizing of the big, bold ideas…Housing, people, and energy are the three big things that we can operationalize very quickly out of this vision.”
Not everyone agreed on the path forward.
Alvin Graylin, a fellow at Stanford’s Institute for Human-Centered Artificial Intelligence, argued that Washington should position itself as a global hub for open-source AI rather than following Silicon Valley’s closed-model, big-spending approach.
He pointed to Chinese labs producing near-equivalent models at a fraction of the cost, and said Washington could tap into millions of open-source developers worldwide rather than competing for a few thousand elite researchers at big labs.
Futurist Alex Lightman discusses his WTIA-commissioned whitepaper on Seattle’s AI advantage. Credit: Ken Yeung
Lightman, the white paper’s author, was skeptical. He noted that Microsoft made Netscape’s browser irrelevant by giving its own browser away, then made trillions selling everything around it. Open source has a ceiling, he argued, and it wouldn’t get Seattle to a trillion-dollar economy.
Separately, Perschbacher wants to bring more federal funding to the state and to improve community outreach so that more people come along as partners.
Can these leaders take all of their ideas and turn them into action? At the very least, the WTIA secured two pledges: The Washington Roundtable and the Seattle Metro Chamber both said they would work with the Governor’s office to shape a statewide economic development strategy, and Perschbacher committed to leading a federal funding working group.
Others joining the conversation included Alicia Teel, deputy director of Seattle’s Office of Economic Development. In addition to Minkarah, representing WTIA were Vice President of Innovation and Entrepreneurship Nick Ellingson, Chair of the Advanced Technologies Cluster Arry Yu, and Director of Industry and Community Relations Terrance Stevenson.
Have you ever asked a chatbot something and felt like it completely missed your point? You say something with a bit of nuance, and the AI misses the subtlety entirely. That is exactly the problem researchers are trying to solve.
The research, by Zhifeng Yuan and Jin Yuan, introduces a model that can break down a sentence and understand how you feel about each part, instead of generalizing everything into one response.
How this system helps AI read your intent better
Think about a sentence like, “The food was great, but the service was terrible.” A typical AI chatbot might struggle because the sentence has both positive and negative emotions.
The proposed model looks at each part of the sentence separately and connects each emotion to the right subject. It relies on an ‘emotional keywords attention network’ to do that.
In simple terms, it teaches AI to focus on words that carry strong emotions, such as “great” or “terrible.” These words guide the system toward understanding what matters most in the sentence.
The model then links those emotional cues to a specific aspect. It learns that “great” applies to food, while “terrible” applies to service. This process, known as aspect-level sentiment analysis, makes responses far more precise.
It also uses attention mechanisms to understand context, so it does not rely on keywords alone. It can figure out how different parts of a sentence connect. Researchers say this method performs better than existing models on standard benchmarks.
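To make the idea concrete, here is a toy sketch in Python of aspect-level sentiment with keyword attention. This is an illustration of the general technique, not the authors' actual architecture: the lexicon, the proximity scoring, and the emotion-keyword boost are all invented for the example. Each token is scored by how close it sits to the aspect word, emotionally charged words get an extra boost, and the scores are softmax-normalised into attention weights that average the lexicon polarities.

```python
import math

# Hypothetical emotion lexicon: word -> polarity score (illustrative only).
LEXICON = {"great": 1.0, "terrible": -1.0, "good": 0.7, "awful": -0.9}

def attention_weights(tokens, aspect_idx, emotion_boost=2.0):
    """Score each token by proximity to the aspect word, boost emotion
    keywords, then softmax-normalise into attention weights."""
    scores = []
    for i, tok in enumerate(tokens):
        proximity = -abs(i - aspect_idx)               # closer tokens score higher
        boost = emotion_boost if tok in LEXICON else 0.0
        scores.append(proximity + boost)
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aspect_sentiment(tokens, aspect):
    """Attention-weighted average of lexicon polarities for one aspect."""
    idx = tokens.index(aspect)
    weights = attention_weights(tokens, idx)
    return sum(w * LEXICON.get(t, 0.0) for w, t in zip(weights, tokens))

tokens = "the food was great but the service was terrible".split()
print(aspect_sentiment(tokens, "food"))     # positive
print(aspect_sentiment(tokens, "service"))  # negative
```

Because "great" sits close to "food" and "terrible" close to "service", the same sentence yields opposite sentiment scores for the two aspects, which is the behaviour the restaurant example above calls for. A real model would learn these associations from embeddings and training data rather than a hand-written lexicon and distance rule.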
This approach can make AI chatbots feel more human
If adopted widely, this could change how AI responds in real-world situations. Chatbots could handle nuanced feedback more effectively instead of defaulting to generic replies. Customer support systems could pinpoint exactly what went wrong and respond with greater accuracy.
With the rapid rise of autonomous agents like OpenClaw and Anthropic’s Claude Work, along with the wide range of opinions about their impact on the future of work, it is not surprising to see renewed interest in workplace PCs. Add to that Intel’s recent release of commercial vPro versions of…
Just when the sub $1,000 streaming amplifier category had turned into a predictable arms race of inputs, outputs, and firmware promises, along came the Marantz Model M1 with that unmistakable Marantz swagger that is now backed by HEOS multi-room integration and Dirac Live room correction to give it some real-world muscle. Sure, the WiiM Amp Ultra and Eversolo Play might dazzle you with more HDMI ports, coaxial inputs, and firmware update promises than a Tesla—but do they offer this much soul? Doubtful.
Here’s the part nobody in the industry really wants to say out loud. The future isn’t being decided in six-figure listening rooms with Italian racks and cables that cost more than your first car. It’s being decided in apartments, offices, and living rooms where people want one box, real performance, and no drama.
The question is whether the industry actually leans into that shift or keeps pretending the old model still scales. Brands like Fosi, WiiM, Bluesound, NAD, Denon, Marantz, Yamaha, and Cambridge Audio clearly see where the market is going. Others? Still chasing a shrinking pool of traditional audiophiles with very deep pockets and very finite patience.
Marantz, to its credit, is covering both ends of the spectrum. The Model M1 reflects where the market is heading, while the Model 10 represents its high-end ambitions; and it’s one of the better implementations of Class D amplification we’ve seen, even if the price puts it out of reach for most buyers. Between those two sits a full range of AVRs and stereo receivers that bridge the gap and make a lot more sense for how people actually build systems today.
Marantz Model M1
Marantz Model M1 Features and Connectivity: Fewer Ports, More Purpose
The Marantz Model M1 is designed as a compact, all-in-one streaming amplifier that simplifies system building without stripping away capability. Rated at 100 watts per channel into 8 ohms with very low distortion, it has enough power to drive a wide range of bookshelf and smaller floorstanding speakers—within reason, of course.
The inclusion of a dedicated subwoofer output with adjustable crossover and ±15dB level trim adds real flexibility for 2.1 setups, allowing for proper integration rather than guesswork.
Unlike traditional integrated amplifiers that juggle analog and digital signal paths, the M1 operates as a digital-first platform. It supports high-resolution PCM up to 24-bit/192 kHz and DSD playback, handling content from streaming services, network storage, or direct USB input with consistency. This approach keeps the signal path clean and controlled, which aligns with Marantz’s goal of delivering a more refined and stable sonic presentation rather than chasing raw specification extremes.
Marantz Model M1
Connectivity is focused but practical. Wireless options include Bluetooth, AirPlay 2, Qobuz Connect, TIDAL Connect and Spotify Connect, while HEOS provides the backbone for multi-room audio with support for up to 32 zones. HEOS also enables integration with home control systems such as Control4, URC, and Crestron, making the M1 viable in both simple and more complex installations.
It also works as a Roon player, although that requires an active Roon subscription and a Roon Core running on your network. The Core acts as the media server and can be hosted on a computer, NAS drive, or other compatible hardware.
For TV integration, HDMI eARC allows the M1 to function as a legitimate soundbar alternative with proper stereo imaging and significantly better amplification. Volume and power control can be handled directly through the TV remote, and the unit can be tucked out of sight without losing usability thanks to full app control and IR learning capability for third-party remotes.
One limitation worth noting is the lack of a built-in phono stage. Vinyl playback requires either a turntable with a built-in preamp or an external phono stage connected to the analog input. It’s a deliberate omission that reinforces the M1’s digital-first identity, but one that analog-focused users will need to plan around.
Onboard Dolby Digital+ decoding supports the audio codecs commonly used by broadcast and streaming TV services, making the Model M1 a viable upgrade over a typical soundbar. Additional options include Dialogue Enhancer for clearer vocals and a Virtual mode that uses Dolby processing to create a more immersive sound field from stereo content.
The Model M1 can also be paired with additional units for multi-room or expanded system setups, and its compact chassis allows two units to fit side-by-side in a standard 19-inch equipment rack if needed.
Cooling is handled through passive thermal management, so there are no fans to introduce noise or potential failure points. Combined with threaded mounting points on the bottom panel, this allows the amplifier to be installed cleanly on a wall bracket or inside cabinetry without concerns about heat buildup.
The Model M1 measures 8-9/16 inches wide, 3-3/8 inches high, and 9-15/16 inches deep, weighs 4.84 pounds, and includes a 5-year warranty.
Building a System Around the Marantz Model M1
This is where things get practical. The goal here isn’t to be cheap, it’s to be smart. There’s a difference. Chasing the lowest price usually ends with compromises you can hear five minutes into your first album. The better play is finding speakers that won’t wreck your bank account (gas and electric bills are already doing a fine job of that) but still deliver real synergy with the M1 without forcing you into endless EQ tweaks.
That matters more than ever with a product like this. The Model M1 has the control and resolution to expose mismatches, but it’s also forgiving enough to reward a well-balanced pairing. You may not even need a subwoofer depending on your room size and speaker choice, which simplifies things even further. And now that Dirac Live room correction is part of the equation, you’ve got a tool that can actually address room issues that used to derail setups like this. Not a miracle cure, but a serious advantage if you use it properly.
I rotated through the DALI Kupid, Q Acoustics 3020c, Acoustic Energy AE100 MK2, and stepped up to the Wharfedale Diamond 12.3 and Q Acoustics 5040 floorstanders to see how far the M1 could stretch without things getting stupid.
The goal wasn’t to build some aspirational system that lives on a dealer floor. I kept the ceiling under $3,000 for a straightforward two-channel setup, and around $5,000 if you add a turntable or a compact subwoofer. Real-world money. Real-world rooms. The kind of systems people actually use in a den, living room, or bedroom without needing a second mortgage or a dedicated listening shrine.
For some people, the first question is obvious: can this small box actually drive medium to higher-sensitivity floorstanding speakers, or is that pushing it? The answer is yes—with some limits. It comes down to how loud you listen and how much space you’re trying to fill.
In my setup, both the Wharfedale Diamond 12.3 and Q Acoustics 5040 proved to be very workable pairings, but placement matters. These aren’t speakers you shove against a wall and forget about. They need roughly 2 to 3 feet of space behind them and at least 2 feet from the side walls to open up properly.
Give them that breathing room and they reward you with excellent imaging and a presentation that pulls away from the cabinets. The soundstage stretches wide, with a convincing sense of height, and both models do a very good job of disappearing when everything is dialed in correctly.
Darkness on the Edge of Town?
From a tonal perspective, the M1 leans slightly to the dark side of the Force, but not at the expense of clarity, speed, or overall presence. It’s not veiled or slow—it just carries more weight and density through the midrange and bass. Compared to something like the WiiM Ultra, the difference is obvious. The M1 delivers more texture and physicality, while the WiiM chases a bit more sparkle and top-end detail. The Marantz never comes across as thin or clinical.
If you’re familiar with Audiolab’s integrated and streaming amps, this goes in the opposite direction. Audiolab tends to run cool, clean, and very controlled, sometimes to the point of feeling a little detached. The M1 adds body, more impact down low, and a sense of drive that makes music feel less polite. You do give up some resolution and edge definition in the bass compared to Audiolab, but the trade-off is a more engaging and substantial presentation.
That character really shows itself with electronic music. Deadmau5, Boards of Canada, Aphex Twin, Kraftwerk, Tangerine Dream; the M1 hits harder and fills in the space between notes in a way that feels more physical. It’s less about precision and more about momentum. Think thick Crayola markers versus ultra-fine ink pens. The Audiolab and WiiM draw cleaner lines, but the Marantz isn’t afraid to color outside them, and for this kind of music, that’s exactly the right move.
Switching over to vocals, the M1 keeps that same tonal balance intact. Male vocals come through with solid texture and weight, sitting slightly forward without sounding pushed. There’s a fullness here that works well with most recordings, but the speaker pairing makes a noticeable difference. I preferred vocals through the Q Acoustics 5040 over the Wharfedale Diamond 12.3; the 5040 offers better resolution and cleaner lower midrange detail, which gives voices more definition without thinning them out.
Sam Cooke, Elvis, Nick Cave, Jason Isbell, and John Prine all came across smooth and grounded. For some listeners, that might tip a bit too far into “safe,” depending on the speaker. Nick Cave in particular benefited from the added weight, but I missed a bit of the edge and growl that defines his delivery. The M1 doesn’t strip away character, but it does round things off slightly.
Left to right: Acoustic Energy AE100 MK2, DALI Kupid, Q Acoustics 3020c bookshelf speakers
Bookshelf Speakers and the Marantz M1: Where Synergy Wins
The bookshelf choices here weren’t random. The DALI Kupid, Q Acoustics 3020c, and Acoustic Energy AE100 MK2 were picked with a specific goal in mind: maximize performance without turning the room into an equipment shrine. These are the kinds of speakers that can live on proper stands or sit cleanly on a credenza under a TV and still deliver a convincing, full-range experience.
To make that work, they had to check a few non-negotiable boxes: real presence, enough impact to carry both music and movie soundtracks, strong imaging, and a soundstage that doesn’t collapse the second you move off-axis. This isn’t about chasing perfection, which isn’t realistic at this price point; it’s about building a system that actually works in a real room, with real constraints, and still sounds like you didn’t cut corners.
For a deeper look at all three, you can check out my shoot-out results, but the short version is that each brings something worthwhile to the table with the M1. The Q Acoustics 3020c is the most complete of the group, offering more output, a wider soundstage, and better overall resolution. The Acoustic Energy AE100 MK2 trades some of that refinement for greater low-end presence and a punchier upper bass and lower midrange, which gives it more weight with rock and electronic tracks.
The DALI Kupid is the most lively of the three, with a more energetic top end that adds air and sparkle without tipping into harshness. That’s not an accident; DALI has a long track record of getting tweeter design right, and it shows here. It’s open and engaging, but never brittle. That said, its U.S. pricing feels a bit ambitious given its size and low-end extension, especially when compared to how it’s positioned in other markets.
So what would I actually buy? Having lived with both pairs of floorstanders, along with the Q Acoustics 3020c and Acoustic Energy AE100 MK2, it’s a lot easier to sort through what works and what doesn’t. On the floorstanding side, I’d lean toward the Q Acoustics 5040—but with a clear condition. Keep them in a reasonably sized room. My den in New Jersey (16 x 13 x 9), the home office I’m converting (21 x 13 x 9), and my Florida setup (15 x 12 x 9) are all good examples of spaces where speakers like the 5040 or Wharfedale Diamond 12.3 make sense. They fill the room without overloading it with bass or turning placement into a constant battle.
On the bookshelf side, I tend to favor the DALI and Q Acoustics pairings for their balance of clarity, imaging, and overall ease of placement. They’re the safer choices if you want something that just works across music, TV, and movies. But if you’re after more low-end weight and a stronger push through the upper bass and lower mids, the Acoustic Energy AE100 MK2 is the sleeper here. It doesn’t get talked about enough. The pacing is excellent, it has real punch for its size, and it looks far more expensive than it has any right to.
But what about HEOS control? That’s going to matter more than anything for a lot of people. In my case, it’s pretty straightforward. I use TIDAL and Qobuz almost exclusively, so having access to TIDAL Connect and Qobuz integration is what I actually care about. Roon isn’t part of the equation anymore. I sold my Nucleus and haven’t looked back. With a 2TB drive on the network holding more than 1,900 CDs ripped to FLAC, I already have everything I need locally without adding another layer of software into the chain.
Before wrapping things up, I also tested the M1 with HDMI eARC across all three of my TVs in New Jersey using a QED cable. No drama. It locked in immediately with no handshake issues, and control worked exactly as expected. Movies and TV were an immediate upgrade. “Landman,” “The Madison” on Paramount+, and even NHL games all benefited from the added scale, clarity, and tonal weight. It’s not even a fair fight compared to internal TV speakers or most of the soundbars I’ve used. I’ll take a proper stereo soundstage and believable dynamics over fake surround tricks every time.
The Bottom Line
The Marantz Model M1 doesn’t try to outgun the competition on features—and that’s the point. It delivers a cohesive, full-bodied sound with real texture, strong midrange presence, and enough power to drive the kinds of speakers people actually use in real rooms. HEOS keeps everything connected, HDMI eARC works without the usual nonsense, and Dirac Live gives you a legitimate tool to deal with room issues instead of pretending they don’t exist.
What you don’t get is just as important. No phono stage, limited analog inputs, and it’s not chasing razor-sharp treble detail or lab-grade precision. This isn’t for someone building a shrine to separates. It’s for someone who wants a clean, compact system that sounds right and doesn’t require a manual and a weekend to figure out. At $1,000, it earns its keep—and then some.
Editors’ Choice in the Network Amplifier category for those who can swing the price and have similar speaker options.
Pros:
Full-bodied, engaging sound with strong midrange and bass weight
Works well with both bookshelf and smaller floorstanding speakers
HEOS integration with built-in TIDAL Connect, Qobuz, and Roon support
HDMI eARC performs reliably in real-world use
Dirac Live adds meaningful room correction capability
Compact design with flexible placement options
Excellent system-building platform for 2.0 or 2.1 setups
Cons:
No built-in phono stage
Limited analog connectivity
Slightly rounded treble may not appeal to detail-focused listeners
UL’s Dr Kyriakos Kourousis discusses his current research in metal additive manufacturing and the work of the Metal Plasticity and Additive Manufacturing Group at UL.
Dr Kyriakos Kourousis is an associate professor in aeronautical engineering at University of Limerick (UL), as well as director of postgraduate research and education for the university’s Faculty of Science & Engineering. He also leads UL’s Metal Plasticity and Additive Manufacturing Group.
Kourousis joined UL’s School of Engineering 12 years ago, and before his career in academia, he spent more than a decade as an aeronautical engineer in the Hellenic Air Force working on aircraft maintenance, airworthiness and structural integrity – experience that he says now shapes his research and teaching.
At UL, he teaches topics around aircraft systems, the airworthiness of aircraft and the practical engineering behind them.
In terms of his current research, Kourousis says his work focuses on two things: how metals behave when they are loaded in a repeated way, leading to permanent deformation – “what engineers call metal plasticity” – and how to make and trust 3D‑printed metal parts (metal additive manufacturing), “especially for those loading conditions that cause plasticity”.
“In simple terms, we test metals, study their microstructure, build computer models that predict how they’ll perform over time, and use those models to predict how permanent deformation builds up during their operation,” he tells SiliconRepublic.com.
“Localised permanent deformation (plasticity) is the origin of fatigue in metals. My work is both on traditional metals and 3D‑printed ones.”
Here, Kourousis tells us about his work and provides a look into the world of 3D-printed materials and aeronautical engineering.
Why is your research important?
As 3D‑printed metal parts move from prototypes to real aircraft and machinery, we need to predict their behaviour with confidence. Experimental data and models help engineers design parts that won’t crack or fail early, and help industry and regulators build the evidence needed for certification. In short, better predictions mean safer, lighter, more efficient products.
Also, from a sustainability point of view, the use and reuse of powder in metal additive manufacturing offers an important advantage over other (traditional) manufacturing processes. However, with each reuse cycle, the recycled powder’s composition and overall ‘quality’ change, which can affect the produced parts, especially in terms of their plasticity behaviour.
What has been the most surprising/interesting realisation or discovery you’ve uncovered as part of this research?
One key finding is how directional 3D‑printed metals can be and what causes this directionality. For example, we showed that changing the build orientation and the post-printing heat treatment of steel parts can noticeably change how they stretch and yield. We saw similar effects in 3D-printed titanium, in particular Ti‑6Al‑4V, which is widely used in the aerospace and biomedical industries.
We’ve also found that even lower‑cost metal 3D printing routes (like material‑extrusion/fused filament fabrication) show clear links between print settings and mechanical performance, useful for small/medium companies exploring affordable metal additive manufacturing.
What are some common misconceptions of your research area?
3D‑printed metals aren’t ‘just like’ traditional (wrought) metals. The layer‑by‑layer process creates a directional ‘grain’, so properties change with build direction, as clearly shown in our work on steel and titanium.

Process signatures matter. Printing can leave tiny pores (lack‑of‑fusion or keyhole) and locked‑in residual stresses; tuning scan strategy and energy helps, but these features still drive plasticity and fatigue if not managed.
An interesting debate I have with colleagues working in material science is that 3D-printed material may appear as having uniform features in the microscale, but the higher scale defects caused by the melting-solidification and re-melting can lead to a quite non-homogeneous part with differing mechanical properties at different loading directions (mechanical anisotropy).
Post‑processing can close the loop. Ageing/stress‑relief and especially hot isostatic pressing (HIP) homogenise the microstructure and seal pores, boosting ductility and fatigue, though outcomes depend on the as‑built quality and the budget available. A key target for the manufacturing industry is to make 3D printing not only accurate and consistent but also affordable, and we see that there is more work that has to be done there.
What has been the most significant development in your field since you started your academic career?
The big shift is the coming‑together of accessible metal 3D‑printing equipment with advanced, physics‑based modelling.
At UL, a milestone was obtaining a GE Concept Laser Mlab Cusing R metal 3D printer through a GE Additive award. Unlike other institutions in Ireland, our 3D printer is hosted within an industrial environment, through a collaborative agreement with our partner, Croom Medical. Our students and researchers can test ideas under realistic conditions, while both UL and Croom Medical leverage the advantages of this strategic partnership.
Can you tell me a bit about the Metal Plasticity and Additive Manufacturing Group at UL?
Our research group leads metal additive manufacturing research at UL.
Our work is built around two main strands: metal plasticity modelling, where we turn lab data into reliable models of how metals actually deform; and metal additive manufacturing, where we study and improve metals such as titanium and steel, translating the results into practical build and heat‑treatment guidelines. Current projects and student work span physics‑informed yield prediction for steel 316L, laser powder bed fusion (the most widely used additive manufacturing method for metals) process optimisation, and corrosion-cyclic plasticity topics for aerospace‑grade alloys.
An interesting piece of recent work showed that, by carefully retuning laser power, scan speed and hatch spacing, we can shift from the usual thin‑layer settings to much thicker layers in laser powder bed fusion of aerospace‑grade titanium, while keeping the process stable and parts dense. Led by one of our doctoral researchers who also works with Croom Medical, the study showed that those thicker‑layer builds delivered strength and ductility on a par with conventional settings, indicating that productivity can rise without an automatic hit to material performance.
Most importantly, after standard vacuum heat treatment and hot‑isostatic pressing, the parts satisfied the relevant industry standards, pointing to a practical path to higher throughput that still fits certification expectations.
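A common way to reason about the trade-off between layer thickness and the other parameters mentioned above is the volumetric energy density metric used in laser powder bed fusion process planning. The sketch below uses illustrative parameter values, not those from the study described here.

```python
# Volumetric energy density, a standard first-order metric in laser
# powder bed fusion:
#     E = P / (v * h * t)   [J/mm^3]
# P: laser power (W), v: scan speed (mm/s),
# h: hatch spacing (mm), t: layer thickness (mm).
# All numbers below are illustrative placeholders.

def energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# Doubling layer thickness halves the energy density unless other
# parameters are retuned to compensate:
thin = energy_density(200, 1000, 0.1, 0.03)
thick_untuned = energy_density(200, 1000, 0.1, 0.06)
thick_retuned = energy_density(200, 500, 0.1, 0.06)  # slower scan restores E
```

This simple relation illustrates why moving to thicker layers requires co-tuning power, speed and hatch spacing to keep the melt pool stable and the parts dense.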
In newly issued guidance, UK officials outlined the timeline for shutting down legacy mobile infrastructure. Operators have already switched off 3G services, and 2G is set to follow between 2029 and 2033. Users are being urged to prepare ahead of time, as not all devices will make the transition intact.
The WIRED Reviews Team has been covering Amazon’s Big Spring Sale since it began on Wednesday, and the overall deals have been … not great, honestly. So far, we’ve found decent markdowns on vacuums, smart bird feeders, and even an air fryer we love, but I just saw that Cadence Capsules, those colorful magnetic containers you may have seen on your social media pages, are 20 percent off. (For reference, the last time I saw them on sale, they were a measly 9 percent off.)
If you’re not familiar, they allow you to decant your full-sized personal care products you use at home—from shampoo and sunscreen to serums and pills—into a labeled, modular system of hexagonal containers that are leak-proof, dishwasher safe, and stick together magnetically in your bag or on a countertop. No more jumbled, travel-sized toiletries and leaky, mismatched bottles and tubes.
Cadence Capsules have garnered some grumbling online for being overly heavy or leaking, but I’ve been using them regularly for about a year—I discuss decanting your daily-use products in my guide to How to Pack Your Beauty Routine for Travel—and haven’t experienced any leaks. They do add weight if you’re trying to travel super-light, and because they’re magnetic, they will also stick to other metal items in your toiletry bag, like bobby pins or other hair accessories. This can be annoying, especially if you’re already feeling chaotic or in a hurry.
Otherwise, Capsules are modular, convenient, and make you feel supremely organized—magnetic, interchangeable inserts for the lids come with permanent labels like “shampoo,” “conditioner,” “cleanser,” and “moisturizer.” Maybe you love this; maybe you don’t. But at least if you buy on Amazon, you can choose which label genre you get (Haircare, Bodycare, Skincare, Daily Routine). If this just isn’t your jam, the Cadence website offers a set of seven that allows you to customize the color and lid label of each Capsule, but that set is not currently on sale.
The Supreme Court tossed out a billion-dollar verdict against an internet service provider (ISP) on Wednesday, in a closely watched case that could have severely damaged many Americans’ access to the internet if it had gone the other way.
Wednesday’s decision in Cox Communications v. Sony Music Entertainment is part of a broader pattern. It is one of a handful of recent Supreme Court cases that threatened to break the internet — or, at least, to fundamentally harm its ability to function as it has for decades. In each case, the justices took a cautious and libertarian approach. And they’ve often done so by lopsided margins. All nine justices joined the result in Cox, although Justices Sonia Sotomayor and Ketanji Brown Jackson criticized some of the nuances of Justice Clarence Thomas’s majority opinion.
Some members of the Court have said explicitly that this wary approach stems from a fear that they do not understand the internet well enough to oversee it. As Justice Elena Kagan said in a 2022 oral argument, “we really don’t know about these things. You know, these are not like the nine greatest experts on the internet.”
Thomas’s opinion in Cox does a fine job of articulating why this case could have upended millions of Americans’ ability to get online. The plaintiffs were major music companies who, in Thomas’s words, have “struggled to protect their copyrights in the age of online music sharing.” It is very easy to pirate copyrighted music online. And the music industry has fought online piracy with mixed success since the Napster Wars of the late 1990s.
Before bringing the Cox lawsuit, the music company plaintiffs used software that allowed them to “detect when copyrighted works are illegally uploaded or downloaded and trace the infringing activity to a particular IP address,” an identification number assigned to online devices. The software informed ISPs when a user at a particular IP address was potentially violating copyright law. After the music companies decided that Cox Communications, the primary defendant in Cox, was not doing enough to cut off these users’ internet access, they sued.
Two practical problems arose from this lawsuit. One is that, as Thomas writes, “many users can share a particular IP address” — such as in a household, coffee shop, hospital, or college dorm. Thus, if Cox had cut off a customer’s internet access whenever someone using that client’s IP address downloaded something illegally, it would also wind up shutting off internet access for dozens or even thousands of innocent people.
Imagine, for example, a high-rise college dormitory where just one student illegally downloads the latest Taylor Swift album. That student might share an IP address with everyone else in that building.
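The ambiguity Thomas describes stems from network address translation (NAT): many devices can sit behind one public IP address. A hypothetical sketch, with made-up device names and an RFC 5737 documentation-range address:

```python
# Hypothetical sketch of why IP-based identification is ambiguous:
# many devices behind one NAT router share a single public IP.

nat_table = {
    "203.0.113.7": [          # one public IP for a whole dorm (example address)
        "room-101-laptop",
        "room-102-phone",
        "room-305-console",
    ],
}

def affected_devices(public_ip):
    """Every device behind this IP loses access if it is cut off."""
    return nat_table.get(public_ip, [])

# Cutting off 203.0.113.7 disconnects all three devices, even if
# only one of them was used to infringe.
```

Detection software that traces activity to an IP address, as described above, therefore cannot distinguish the infringing device from its innocent neighbors.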
The other reason the Cox case could have fundamentally changed how people get online is that the monetary penalties for violating federal copyright law are often astronomical. Again, the plaintiffs in Cox won a billion-dollar verdict in the trial court. If these plaintiffs had prevailed in front of the Supreme Court, ISPs would likely have been forced into draconian crackdowns on any customer that allowed any internet users to pirate music online — because the costs of failing to do so would be catastrophic.
But that won’t happen. After Cox, college students, hospital patients, and hotel guests across the country can rest assured that they will not lose internet access just because someone down the hall illegally downloads “The Fate of Ophelia.” Thomas’s decision does not simply reject the music industry’s suit against Cox, it nukes it from orbit.
Cox, moreover, is the most recent of at least three decisions where the Court showed similarly broad skepticism of lawsuits or statutes seeking to regulate the internet.
The Supreme Court is an internet-based company’s best friend
The most striking thing about Thomas’s majority opinion in Cox is its breadth. Cox does not simply reject this one lawsuit, it cuts off a wide swath of copyright suits against internet service providers.
Thomas argues that, in order to prevail in Cox, the music industry plaintiffs would have needed to show that Cox “intended” for its customers to use its service for copyright infringement. To overcome this hurdle, the plaintiffs would have needed to show either that internet service providers “promoted and marketed their [service] as a tool to infringe copyrights” or that the only viable use of the internet is to illegally download copyrighted music.
Thomas also adds that the mere fact that Cox may have known that some of its users were illegally pirating copyrighted material is not enough to hold them liable for that activity.
As a legal matter, this very broad holding is dubious. As Sotomayor argues in a separate opinion, Congress enacted a law in 1998 which creates a safe harbor for some ISPs that are sued for copyright infringement by their customers. Under that 1998 law, the lawsuit fails if the ISP “adopted and reasonably implemented” a system to terminate repeat offenders of federal copyright law.
The fact that this safe harbor exists suggests that Congress believed that ISPs which do not comply with its terms may be sued. But Thomas’s opinion cuts off many lawsuits against defendants who do not comply with the safe harbor provision.
Still, while lawyers can quibble about whether Thomas or Sotomayor have the best reading of federal law, Thomas’s opinion was joined by a total of seven justices. And it is consistent with the Court’s previous decisions seeking to protect the internet from lawsuits and statutes that could undermine its ability to function.
In Twitter v. Taamneh (2023), a unanimous Supreme Court rejected a lawsuit seeking to hold social media companies liable for overseas terrorist activity. Twitter arose out of a federal law permitting suits against anyone “who aids and abets, by knowingly providing substantial assistance” to certain acts of “international terrorism.” The plaintiffs in Twitter claimed that social media companies were liable for an ISIS attack that killed 39 people in Istanbul, because ISIS used those companies’ platforms to post recruitment videos and other content.
Thomas also wrote the majority opinion in Twitter, and his opinion in that case mirrors the Cox decision’s view that internet companies generally should not be held responsible for bad actors who use their products. “Ordinary merchants,” Thomas wrote in Twitter, typically should not “become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer.”
Indeed, several key justices are so protective of the internet — or, at least, so cautious about interfering with it — that they’ve taken a libertarian approach to internet companies even when their own political party wants to control online discourse.
In Moody v. NetChoice (2024), the Court considered two state laws, one from Texas and one from Florida, that sought to force social media companies to publish conservative and Republican voices that those companies had allegedly banned or otherwise suppressed. As Texas’s Republican Gov. Greg Abbott said of his state’s law, it was enacted to stop a supposedly “dangerous movement by social media companies to silence conservative viewpoints and ideas.”
Both laws were blatantly unconstitutional. The First Amendment does not permit the government to force Twitter or Facebook to unban someone for the same reason the government cannot force a newspaper to publish op-eds disagreeing with its regular columnists. As the Court held in Miami Herald Publishing Co. v. Tornillo (1974), media outlets have an absolute right to determine “the choice of material” that they publish.
After Moody reached the Supreme Court, however, the justices uncovered a procedural flaw in the plaintiffs’ case that should have required them to send the case back down to the lower courts without weighing in on whether the two state laws are constitutional. Yet, while the Court did send the case back down, it did so with a very pointed warning that the US Court of Appeals for the Fifth Circuit, which had backed Texas’s law, “was wrong.”
Six justices, including three Republicans, joined a majority opinion leaving no doubt that the Texas and Florida laws violate the First Amendment. They protected the sanctity of the internet, even when it was procedurally improper for them to do so.
This Supreme Court isn’t normally so protective of institutions
One reason why the Court’s hands-off-the-internet approach in Cox, Twitter, and Moody is so remarkable is that the Supreme Court’s current majority rarely shows such restraint in other cases, at least when those cases have high partisan or ideological stakes.
In two recent decisions — Mahmoud v. Taylor (2025) and Mirabelli v. Bonta (2026) — for example, the Court’s Republican majority imposed onerous new burdens on public schools, which appear to be designed to prevent those schools from teaching a pro-LGBTQ viewpoint to students whose parents find gay or trans people objectionable. I’ve previously explained why public schools will struggle to comply with Mahmoud and Mirabelli, and why many might find compliance impossible. Neither opinion showed even a hint of the caution that the Court displayed in Cox and similar cases.
Similarly, in Medina v. Planned Parenthood (2025), the Court handed down a decision that is likely to render much of federal Medicaid law unenforceable. If taken seriously, Medina overrules decades of Supreme Court decisions shaping the rights of about 76 million Medicaid patients, including a decision the Court handed down as recently as 2023 — though it remains to be seen if the Court’s Republican majority will apply Medina’s new rule in a case that doesn’t involve an abortion provider.
The Court’s Republican majority, in other words, is rarely cautious. And it is often willing to throw important American institutions such as the public school system or the US health care system into turmoil, especially in highly ideological cases.
But this Court does appear to hold the internet in the same high regard that it holds religious conservatives and opponents of abortion. And that means that the internet is one institution that these justices will protect.