Can you build a startup without sacrificing your mental health? Bonobos founder Andy Dunn thinks so

Bonobos founder Andy Dunn is back in the builder’s seat, working on an in-person social media platform called Pie. But the biggest lessons he learned from his $310 million Bonobos exit don’t have as much to do with entrepreneurship as they do with staying sane.

When Dunn was in college, he was diagnosed with bipolar disorder, but he didn’t get adequate treatment until 2016, when he was hospitalized during a manic episode for the second time.

“The manic state is just a disaster — that’s like being in psychosis, you know, messianic delusions. … You can’t accomplish anything in that state,” Dunn said onstage at TechCrunch Disrupt 2024. The incident was enough of a wakeup call that 16 years after his initial diagnosis, he finally took his condition seriously and started going to therapy, taking medication, and monitoring his sleep.

Dunn wrote a book called “Burn Rate: Launching a Startup and Losing My Mind,” documenting the parallel processes of building Bonobos and figuring out how to accept and then manage his bipolar disorder. But the lessons from the book are applicable for entrepreneurs beyond those with Dunn’s diagnosis.

“We all have mental health, right? It doesn’t take a diagnosis to suffer or struggle,” he said.

Still, entrepreneurs tend to report a higher incidence of mental health issues throughout their lives than the average person.

“There’s definitely a correlation between neurodivergence and creativity,” he said. “I don’t know if entrepreneurship attracts people who are neurodivergent, or it makes them more neurodivergent, but there’s certainly some kind of a virtuous and sometimes unvirtuous cycle there.”

That interplay between mental illness and entrepreneurship is even more palpable for Dunn, who says that the state of hypomania — the high of bipolar disorder, as opposed to the crushing depressive periods — could be conducive to running a startup.

“Here are the DSM criteria for [hypomania]: rapid speech, increased ideation, grandiosity, decreased need for sleep, ability to be more creative … more or less the central casting traits of an entrepreneur having a good day,” he said. “I was able to benefit from that, but the price that I paid was ultimately too high. I was depressed with suicidal ideation for between two to three months a year, and then ultimately, the full mania and psychosis came raging back, which was catastrophic.”

But even in an astonishingly productive hypomanic state, Dunn doesn’t think he was the greatest boss or colleague. He said that one of the side effects of hypomania is becoming irritable when people disagree with you, even though disagreement is essential to running a collaborative company. Now, running Pie, Dunn welcomes that debate.

“When we disagree, let’s go, let’s disagree even more, because we’re going to be able to make a better decision coming out of it,” he said.

While discussions about mental health have become more mainstream, founders still worry about the stigma of revealing a diagnosis to colleagues and investors. Dunn is an adviser to the Founder Mental Health Pledge, which asks investors to advocate for the mental health of the founders they invest in. But he’s not naive about the stigma that remains: when founders ask for his advice about when to disclose a mental health concern to investors, he says to wait until six weeks after the deal closes.

“We raised $125 million at Bonobos — would you give $125 million to someone who can either be psychotic or catatonically depressed?” Dunn said. “But also, you shouldn’t do what I did and hide it, because then, you know, when there is a crisis, it’s a surprise.”

Dunn’s discussion of his experience with bipolar disorder doesn’t seem to have hurt his ability to fundraise, though — Pie just raised an $11.5 million Series A. As public as he is about the severity of bipolar disorder, he’s also open about how his regimen of therapy and medication has helped him live a stable life.

“I treat bipolar as my Olympic regimen. For Simone Biles, it’s how to navigate and win the gold,” he said. “For me, the gold medal is to die of something else, right? Because the horrible thing about bipolar is the suicide rate.”

Now, the next test for Dunn is to do the work it takes to make Pie a success without sacrificing his stability.

“Here’s the challenge,” Dunn said. “We want to have good mental health, and we want our teams to have balance in mental health, and yet a 40-hour workweek doesn’t cut it. You can’t change the world with a bunch of people working 40 hours a week.”

One way Dunn has navigated this fine line is to be open with job candidates about what the work will entail, as well as how he will support them with company benefits.

“I have a new spiel I give when recruiting, which is, this is a 50- to 60-hour-per-week job, and in return, you’re going to get two awesome things. One, you’re going to learn more and grow more and develop more. Two, you’ve got equity,” he said.

Like any startup leader, Dunn wants his team to work hard, but he believes there’s a way to do that without it backfiring. In describing his time at Bonobos in “Burn Rate,” Dunn writes, “I came to a classic mistaken conclusion of an immature startup founder: if the business isn’t working, then we must not be working hard enough.”

There’s no denying that founders need to work hard — but taking care of oneself is part of that hard work.

Organizations face a critical disconnect between their data protection protocols and actual practices

From streamlining operations to automating complex processes, AI has revolutionized how organizations approach tasks. However, as the technology becomes more prevalent, organizations are discovering that the rush to embrace AI may come with unintended consequences.

A report by Swimlane reveals that while AI offers tremendous benefits, its adoption has outpaced many companies’ ability to safeguard sensitive data. As businesses deeply integrate AI into their operations, they must also contend with the associated risks, including data breaches, compliance lapses, and security protocol failures.

Agatha All Along creator wrote post-credits scenes for the Marvel series that weren’t used

Agatha All Along is one of the most widely liked titles that Marvel Studios has released in, well, a while. The WandaVision spin-off premiered in late September and did a lot to win over even some of the Marvel Cinematic Universe’s more skeptical fans across its nine episodes. Yet while Agatha All Along sets up some exciting future possibilities for several of its characters, its finale doesn’t include a single post-credits scene.

According to Agatha All Along creator Jac Schaeffer, that isn’t because she didn’t have any ideas for one. When asked about the series’ lack of a post-credits tag, Schaeffer told Variety, “That’s a Marvel decision. I know nothing more than that.” The writer and showrunner went on to reveal that she actually wrote multiple potential post-credits scenes for Agatha All Along, none of which were ultimately used because of behind-the-scenes decision-making by Marvel.

“I wrote a number of tags, because you always do on every Marvel everything. I love writing tags. I think some of my best writing is in the tags that were never made. I should have a little binder of my tags. They’re so fun to write, because you’re writing the promise without having to deliver on anything. They’re the best,” Schaeffer commented. “But there are so many things that factor into those. And I was told that we weren’t going to do a tag on this show.”

Ghost Agatha floats next to Billy Maximoff in Agatha All Along.
Marvel Studios

Schaeffer, of course, has experience writing post-credits scenes. After all, her previous Marvel show, 2021’s WandaVision, ends with a brief scene that directly sets up the events of 2022’s Doctor Strange in the Multiverse of Madness and specifically Wanda Maximoff’s (Elizabeth Olsen) villainous actions throughout it. The absence of a similar post-credits tag at the end of Agatha All Along, therefore, came as a shock for a number of reasons.

Marvel’s thinking behind this absence may have to remain a mystery to viewers for the time being. Fortunately, Agatha All Along does go out of its way to set up more adventures for, at the very least, Billy Maximoff (Joe Locke). The character teams up with the ghost of Agatha Harkness (Kathryn Hahn) to go searching for his missing brother’s soul in the Agatha All Along finale, but fans will have to wait to learn more about Marvel’s plans for Agatha, Billy, and the other members of the Maximoff family.

Agatha All Along is streaming now on Disney+.

Bose QuietComfort headphones return to all-time low

Bose is without a doubt one of the top contenders for active noise-cancelling headphones with its QuietComfort model, and right now Amazon has a pretty good deal on them worth looking into.

The QuietComfort headphones normally retail for $349 at full price, but Amazon currently has them on sale for $199. That’s a great price for these headphones and the lowest we’ve seen them, as well as the lowest price in the last 30 days, with the average price sitting at around $285.20.

The main feature here is the active noise cancellation, which is a big part of why these have been so popular: you can wear them and drown out almost everything so you can enjoy your music or whatever else you’re listening to. That makes them great for travel. If you fly a lot, these are perfect for taking on flights so you don’t have to hear everyone else on the plane.

They also sound pretty good. Battery life is great too, lasting up to 24 hours on a single charge. While that isn’t the longest battery life we’ve ever seen, it’s more than enough for most people and will get you through a few days before you need to plug them in. A feature that we really love is that they fold up and come with a protective travel case, so whether you use the case or not, they’re easily packable. Bose also sells these in several different colors: Cypress Green, Moonstone Blue, Black, Blue Dusk, Chilled Lilac, Sandstone, and White.

Plus, all of these colors are on sale for the discounted $199 price tag. There’s an EQ in the companion app as well if you want to tune the sound to your personal liking.

FBI warns voters about inauthentic videos relating to election security

The FBI issued a statement on Saturday about deceptive videos circulating ahead of the election, saying it’s aware of two such videos “falsely claiming to be from the FBI relating to election security.” That includes one claiming the FBI had “apprehended three linked groups committing ballot fraud,” and one about Kamala Harris’ husband. Both depict false content, the FBI said.

Disinformation — including the spread of political deepfakes and other forms of misleading videos and imagery — has been a major concern in the leadup to the US presidential election. In its statement posted on X, the FBI added:

Election integrity is among our highest priorities, and the FBI is working closely with state and local law enforcement partners to respond to election threats and protect our communities as Americans exercise their right to vote. Attempts to deceive the public with false content about FBI operations undermines our democratic process and aims to erode trust in the electoral system.

Just a day earlier, the FBI, along with the Office of the Director of National Intelligence (ODNI) and the Cybersecurity and Infrastructure Security Agency (CISA), said they’d traced two other videos back to “Russian influence actors,” including one “that falsely depicted individuals claiming to be from Haiti and voting illegally in multiple counties in Georgia.”

Why multi-agent AI tackles complexities LLMs can’t

The introduction of ChatGPT has brought large language models (LLMs) into widespread use across both tech and non-tech industries. This popularity is primarily due to two factors:

  1. LLMs as a knowledge storehouse: LLMs are trained on a vast amount of internet data and are updated at regular intervals (that is, GPT-3, GPT-3.5, GPT-4, GPT-4o, and others);
  2. Emergent abilities: As LLMs grow, they display abilities not found in smaller models.

Does this mean we have already reached human-level intelligence, which we call artificial general intelligence (AGI)? Gartner defines AGI as a form of AI that possesses the ability to understand, learn and apply knowledge across a wide range of tasks and domains. The road to AGI is long, with one key hurdle being the auto-regressive nature of LLM training that predicts words based on past sequences. As one of the pioneers in AI research, Yann LeCun points out that LLMs can drift away from accurate responses due to their auto-regressive nature. Consequently, LLMs have several limitations:

  • Limited knowledge: While trained on vast data, LLMs lack up-to-date world knowledge.
  • Limited reasoning: LLMs have limited reasoning capability. As Subbarao Kambhampati points out, LLMs are good knowledge retrievers but not good reasoners.
  • No dynamicity: LLMs are static and unable to access real-time information.

To overcome these challenges, a more advanced approach is required. This is where agents become crucial.

Agents to the rescue

The concept of the intelligent agent in AI has evolved over two decades, with implementations changing over time. Today, agents are discussed in the context of LLMs. Simply put, an agent is like a Swiss Army knife for LLM challenges: it can help us reason, provide a means to get up-to-date information from the Internet (solving the dynamicity issue with LLMs) and achieve tasks autonomously. With an LLM as its backbone, an agent formally comprises tools, memory, reasoning (or planning) and action components.

Components of an agent (Image Credit: Lilian Weng)

Components of AI agents

  • Tools enable agents to access external information — whether from the internet, databases, or APIs — allowing them to gather necessary data.
  • Memory can be short or long-term. Agents use scratchpad memory to temporarily hold results from various sources, while chat history is an example of long-term memory.
  • The Reasoner allows agents to think methodically, breaking complex tasks into manageable subtasks for effective processing.
  • Actions: Agents perform actions based on their environment and reasoning, adapting and solving tasks iteratively through feedback. ReAct is one of the common methods for iteratively performing reasoning and action; a minimal sketch of this loop follows the list.
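To make that loop concrete, here is a minimal, framework-free sketch of a ReAct-style reason/act cycle. The `call_llm` helper and the toy `search` tool are hypothetical stand-ins, stubbed with canned responses so the example runs as written; a real agent would plug in an actual model client and real tools.

```python
from typing import Callable, Dict

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call, stubbed with canned replies so the sketch runs."""
    if "Observation:" not in prompt:
        return "Thought: I should look up the weather.\nAction: search[weather in Paris]"
    return "Thought: I have what I need.\nFinal Answer: It is sunny and 21°C in Paris."

# Toy tool registry; a real agent would map names to web search, databases, APIs, etc.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda query: f"Top result for '{query}': sunny, 21°C",
}

def react_agent(question: str, max_steps: int = 5) -> str:
    """Iterate reason -> act -> observe until the model emits a final answer."""
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        reply = call_llm(prompt)
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        # Parse "Action: tool[input]" and run the matching tool.
        action = reply.split("Action:", 1)[1].strip()
        tool_name, tool_input = action.split("[", 1)
        observation = TOOLS[tool_name.strip()](tool_input.rstrip("]"))
        # Feed the observation back so the next reasoning step can use it.
        prompt += f"\n{reply}\nObservation: {observation}"
    return "No final answer within the step budget."

print(react_agent("What is the weather in Paris?"))
```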

What are agents good at?

Agents excel at complex tasks, especially when in a role-playing mode, leveraging the enhanced performance of LLMs. For instance, when writing a blog, one agent may focus on research while another handles writing — each tackling a specific sub-goal. This multi-agent approach applies to numerous real-life problems.

Role-playing helps agents stay focused on specific tasks to achieve larger objectives, reducing hallucinations by clearly defining parts of a prompt — such as role, instruction and context. Since LLM performance depends on well-structured prompts, various frameworks formalize this process. One such framework, CrewAI, provides a structured approach to defining role-playing, as we’ll discuss next.
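As a rough illustration of that researcher/writer split, the sketch below uses CrewAI's Agent/Task/Crew pattern. The parameter names follow CrewAI's documented interface, but exact signatures vary between versions, and actually running it requires an LLM API key configured for the library; treat it as a sketch under those assumptions rather than a drop-in implementation.

```python
from crewai import Agent, Task, Crew, Process

# Two role-playing agents, each with a narrow goal and persona.
researcher = Agent(
    role="Research analyst",
    goal="Collect accurate, up-to-date facts on the assigned topic",
    backstory="A meticulous analyst who cites sources and avoids speculation.",
)
writer = Agent(
    role="Technical writer",
    goal="Turn the research notes into a clear, engaging blog post",
    backstory="An editor who values structure and plain language.",
)

# Each task maps to a sub-goal and is owned by one agent.
research_task = Task(
    description="Research the current state of multi-agent AI frameworks.",
    expected_output="A bullet list of key findings with brief explanations.",
    agent=researcher,
)
writing_task = Task(
    description="Write a 500-word blog post based on the research findings.",
    expected_output="A publish-ready draft in markdown.",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,  # the writer runs after the researcher
)

if __name__ == "__main__":
    print(crew.kickoff())
```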

Multi-agent vs. single-agent

Take the example of retrieval augmented generation (RAG) using a single agent. It’s an effective way to empower LLMs to handle domain-specific queries by leveraging information from indexed documents. However, single-agent RAG comes with its own limitations, such as retrieval performance or document ranking. Multi-agent RAG overcomes these limitations by employing specialized agents for document understanding, retrieval and ranking.
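A toy, framework-free sketch of that division of labor: each RAG stage is a specialized "agent" (here just a plain function), so retrieval, ranking, and answer generation can be tuned or swapped independently. The keyword-overlap retriever and ranker, and the `answer_agent` stand-in for an LLM call, are illustrative assumptions only.

```python
from typing import List

# Tiny in-memory corpus standing in for an indexed document store.
DOCS = [
    "CrewAI organizes agents into crews with roles, tasks and a process.",
    "LangGraph models a workflow as a graph of nodes and edges.",
    "Single-agent RAG retrieves and answers in one step.",
]

def retrieval_agent(query: str, docs: List[str]) -> List[str]:
    """Specialized retriever: keep documents sharing at least one query term."""
    terms = set(query.lower().split())
    return [d for d in docs if terms & set(d.lower().split())]

def ranking_agent(query: str, candidates: List[str]) -> List[str]:
    """Specialized ranker: order candidates by term overlap with the query."""
    terms = set(query.lower().split())
    return sorted(candidates, key=lambda d: -len(terms & set(d.lower().split())))

def answer_agent(query: str, context: List[str]) -> str:
    """Stand-in for an LLM call that would synthesize an answer from context."""
    top = context[0] if context else "no relevant documents"
    return f"Answer to '{query}', grounded in: {top}"

def multi_agent_rag(query: str) -> str:
    # Each stage can be improved or replaced without touching the others.
    candidates = retrieval_agent(query, DOCS)
    ranked = ranking_agent(query, candidates)
    return answer_agent(query, ranked)

print(multi_agent_rag("How does CrewAI organize agents and tasks?"))
```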

In a multi-agent scenario, agents collaborate in different ways, similar to distributed computing patterns: sequential, centralized, decentralized or shared message pools. Frameworks like CrewAI, Autogen, and langGraph+langChain enable complex problem-solving with multi-agent approaches. In this article, I have used CrewAI as the reference framework to explore autonomous workflow management.

Workflow management: A use case for multi-agent systems

Most industrial processes are about managing workflows, be it loan processing, marketing campaign management or even DevOps. Steps, either sequential or cyclic, are required to achieve a particular goal. In a traditional approach, each step (say, loan application verification) requires a human to perform the tedious and mundane task of manually processing each application and verifying them before moving to the next step.

Each step requires input from an expert in that area. In a multi-agent setup using CrewAI, each step is handled by a crew consisting of multiple agents. For instance, in loan application verification, one agent may verify the user’s identity through background checks on documents like a driving license, while another agent verifies the user’s financial details.

This raises the question: Can a single crew (with multiple agents in sequence or hierarchy) handle all loan processing steps? While possible, it complicates the crew, requiring extensive temporary memory and increasing the risk of goal deviation and hallucination. A more effective approach is to treat each loan processing step as a separate crew, viewing the entire workflow as a graph of crew nodes (using tools like langGraph) operating sequentially or cyclically.

Since LLMs are still in their early stages of intelligence, full workflow management cannot be entirely autonomous. Human-in-the-loop is needed at key stages for end-user verification. For instance, after the crew completes the loan application verification step, human oversight is necessary to validate the results. Over time, as confidence in AI grows, some steps may become fully autonomous. Currently, AI-based workflow management functions in an assistive role, streamlining tedious tasks and reducing overall processing time.
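A minimal sketch of that "graph of crew nodes" idea, assuming LangGraph's StateGraph API: the node names and state fields below are invented for illustration, each node function is a placeholder where a full crew would run, and the `human_review` node marks the human-in-the-loop checkpoint described above.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END

class LoanState(TypedDict):
    application: dict
    identity_ok: bool
    finances_ok: bool
    approved: bool

def verify_identity(state: LoanState) -> dict:
    # Placeholder for an identity-verification crew (documents, background checks).
    return {"identity_ok": True}

def verify_finances(state: LoanState) -> dict:
    # Placeholder for a financial-verification crew (income, credit history).
    return {"finances_ok": True}

def human_review(state: LoanState) -> dict:
    # Human-in-the-loop checkpoint: a person confirms the automated results.
    return {"approved": state["identity_ok"] and state["finances_ok"]}

workflow = StateGraph(LoanState)
workflow.add_node("verify_identity", verify_identity)
workflow.add_node("verify_finances", verify_finances)
workflow.add_node("human_review", human_review)
workflow.set_entry_point("verify_identity")
workflow.add_edge("verify_identity", "verify_finances")
workflow.add_edge("verify_finances", "human_review")
workflow.add_edge("human_review", END)

app = workflow.compile()
print(app.invoke({
    "application": {"name": "Jane Doe"},
    "identity_ok": False,
    "finances_ok": False,
    "approved": False,
}))
```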

Production challenges

Bringing multi-agent solutions into production can present several challenges.

  • Scale: As the number of agents grows, collaboration and management become challenging. Various frameworks offer scalable solutions — for example, LlamaIndex uses event-driven workflows to manage multi-agent systems at scale.
  • Latency: Agent performance often incurs latency as tasks are executed iteratively, requiring multiple LLM calls. Managed LLMs (like GPT-4o) are slow because of implicit guardrails and network delays. Self-hosted LLMs (with GPU control) come in handy in solving latency issues.
  • Performance and hallucination issues: Due to the probabilistic nature of LLMs, agent performance can vary with each execution. Techniques like output templating (for instance, JSON format) and providing ample examples in prompts can help reduce response variability (a minimal sketch follows this list). The problem of hallucination can be further reduced by training agents.
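A minimal sketch of the output-templating idea from the last bullet: request strict JSON, validate it, and retry before accepting a response. The `call_llm` helper and the schema shown are hypothetical stand-ins, stubbed so the example runs as written.

```python
import json

# Schema hint repeated in the prompt to keep the model's output machine-parseable.
SCHEMA_HINT = (
    'Respond ONLY with JSON of the form '
    '{"decision": "approve" | "reject", "reason": "<one sentence>"}'
)

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call, stubbed with a canned JSON reply for illustration."""
    return '{"decision": "approve", "reason": "All checks passed."}'

def structured_call(question: str, retries: int = 3) -> dict:
    """Ask for JSON, validate it, and retry with a reminder if parsing fails."""
    prompt = f"{question}\n\n{SCHEMA_HINT}"
    for _ in range(retries):
        raw = call_llm(prompt)
        try:
            parsed = json.loads(raw)
            if isinstance(parsed, dict) and {"decision", "reason"} <= parsed.keys():
                return parsed  # well-formed: accept it
        except json.JSONDecodeError:
            pass
        # Malformed output: restate the schema and try again.
        prompt = f"{question}\n\nYour last reply was not valid JSON. {SCHEMA_HINT}"
    raise ValueError("Model did not return valid JSON within the retry budget.")

print(structured_call("Should this loan application be approved?"))
```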

Final thoughts

As Andrew Ng points out, agents are the future of AI and will continue to evolve alongside LLMs. Multi-agent systems will advance in processing multi-modal data (text, images, video, audio) and tackling increasingly complex tasks. While AGI and fully autonomous systems are still on the horizon, multi-agents will bridge the current gap between LLMs and AGI.

Abhishek Gupta is a principal data scientist at Talentica Software.

CareYaya is enabling affordable home care by connecting healthcare students with elders

CareYaya, a platform that matches people who need caregivers with healthcare students, is working to disrupt the caregiving industry. The startup, which exhibited as part of the Battlefield 200 at TechCrunch Disrupt, is looking to enhance affordable in-home support, while also helping students prepare for their future healthcare careers.

The startup was founded in 2022 by Neal Shah, who came up with the idea based on his own experience as a caregiver for his wife after she became ill with cancer and various other ailments. During this time, Shah was a partner at a hedge fund and had to wind down his fund to become a full-time caregiver for two years.

To get additional care for his wife, Shah hired college students who were studying healthcare to be caregivers for his wife. Shah learned that other families were doing the same thing informally by posting flyers at local campuses to find someone who was qualified to look after their loved one. 

“I was like, wouldn’t it be nice to just build a formal system for them to do it, where you don’t have to go to your local nursing school or your local undergrad campus and post flyers,” Shah told TechCrunch. “This is what I was doing. So we were like, if you can bring that into a formal capacity through a tech platform, you can make a big impact.” 

Fast-forward to 2024, and CareYaya now has over 25,000 students on its platform from numerous schools, including Duke University, Stanford, UC Berkeley, San Jose State, University of Texas at Austin, and more.

Image Credits: CareYaya

CareYaya performs background checks on students who want to join the platform and then completes video-based interviews with them. On the user side, people can join the platform and then detail the type of care their loved one needs. CareYaya then matches students to families, whether it’s for one-off sessions or continuous care. After the first session, both parties can leave ratings.

The startup says it can help families save thousands of dollars on recurring senior care. While at-home care costs an average of $35 per hour in the U.S., CareYaya charges between $17 and $20 per hour.

Since the students providing the care are tech savvy, CareYaya is equipping them with AI-powered technology to recognize and track disease progression in patients with Alzheimer’s and dementia. The company recently launched an LLM (large language model) that integrates with smart glasses to gather visual data to help students provide better real-time assistance and conduct early dementia screening.

In terms of the future, CareYaya wants to explore expanding beyond the United States, as the platform has seen interest from people in places like Canada, Australia, and the United Kingdom. 
