Technology

FunPlus opens Studio Ellipsis to make cross-platform games in Lisbon


FunPlus, a Swiss company that has become one of the world’s largest independent game companies, has opened its new Studio Ellipsis in Lisbon, Portugal.

Lisbon is becoming a new hub for games, and this studio will be a creative center focused on expanding FunPlus' intellectual property (IP) across a variety of entertainment platforms.

Led by industry veteran Alexandre Amancio, Studio Ellipsis’ ambition is to create entertainment experiences that cater to the evolving expectations of modern audiences.

Through a combination of cross-platform game development and transmedia storytelling, the studio will bring FunPlus’ IP to life in new ways, inviting fans to explore fresh lore, characters, and stories. In the coming months, the studio will unveil additional projects that expand FunPlus’ portfolio of worlds, as well as original IPs designed for cross-medium development.


Amancio, who joined FunPlus in 2023, brings his expertise in creative direction, narrative design and IP development to lead this exciting new venture.

“We are witnessing a transformation in how audiences engage with entertainment. Fans are no longer passive consumers; they are explorers, seeking immersive experiences across a wide range of platforms,” said Amancio, head of studio and senior vice president of world building & IP strategy at FunPlus, in a statement. “Studio Ellipsis is our response to this shift, creating new opportunities for fans to navigate deeper into the worlds they love, while also crafting original IPs that inspire and captivate, empowering them to become part of the journey.”

This marks a significant step forward for FunPlus. Already a well-established developer and publisher of mobile games, the company aims to become a leader in the broader interactive entertainment space, building on this vision of world-building and IP expansion.

The Lisbon-based studio, founded by a core team of highly experienced professionals, will spearhead FunPlus’ efforts to deliver rich stories and characters through multiple entertainment mediums, allowing fans to explore these worlds from fresh perspectives.


“The launch of Studio Ellipsis marks a pivotal moment in FunPlus’ journey to become a global powerhouse in entertainment, extending far beyond our roots in the strategy genre,” said Chris Petrovic, chief business officer at FunPlus, in a statement. “By establishing a dedicated studio in a vibrant city like Lisbon, we are not only expanding our existing worlds but also laying the foundation for new IPs that can thrive across multiple mediums. This studio represents our commitment to evolving with our audiences, meeting their demand for immersive storytelling, and creating experiences that resonate deeply within our fans.”

Studio Ellipsis is the latest milestone in FunPlus’ strategic move to evolve beyond gaming, embracing transmedia storytelling. The first project to launch from this initiative is Sea of Conquest: Cradle of the Gods, a comic series based on the popular strategy game Sea of Conquest: Pirate War. This high-quality comic was developed by a talented team of artists and writers, with the first issue available now for free on digital platforms.

FunPlus said it chose Lisbon for Studio Ellipsis due to its burgeoning potential as a creative and technological hub, with the Portuguese capital gaining recognition as a dynamic environment for game development and digital entertainment. FunPlus’ investment in the region, coupled with its collaboration with local universities and organizations like the APVP (Associação de Produtores de Videojogos Portugueses) and The Gaming Hub by Unicorn Factory Lisboa, will foster local talent and contribute to the growth of the game development ecosystem.


“I am incredibly proud to welcome Studio Ellipsis from FunPlus to Lisbon, the European Capital of Innovation,” said Carlos Moedas, Mayor of Lisbon, in a statement. “Thanks to initiatives like the Unicorn Factory, our city has transformed into a global hub for innovators and unicorn companies, where some of the most groundbreaking and disruptive tech products and services are being developed. This unprecedented effort is changing lives across the city, with over 15,000 job opportunities being created in under three years. We are thrilled to include FunPlus as a new member of our exciting and diverse tech community. Welcome to Lisbon.”


Technology

NYT Connections: hints and answers for Thursday, October 17

New York Times' Connections puzzle open in the NYT Games app on iOS.
Sam Hill / Digital Trends

Connections is the latest puzzle game from the New York Times. The game tasks you with categorizing a pool of 16 words into four secret (for now) groups by figuring out how the words relate to each other. The puzzle resets every night at midnight and each new puzzle has a varying degree of difficulty. Just like Wordle, you can keep track of your winning streak and compare your scores with friends.

Some days are trickier than others. If you’re having a little trouble solving today’s Connections puzzle, check out our tips and hints below. And if you still can’t get it, we’ll tell you today’s answers at the very end.

How to play Connections

In Connections, you’ll be shown a grid containing 16 words — your objective is to organize these words into four sets of four by identifying the connections that link them. These sets could encompass concepts like titles of video game franchises, book series sequels, shades of red, names of chain restaurants, etc.

There are generally words that seem like they could fit multiple themes, but there’s only one 100% correct answer. You’re able to shuffle the grid of words and rearrange them to help better see the potential connections.

Each group is color-coded. The yellow group is the easiest to figure out, followed by the green, blue, and purple groups.


Pick four words and hit Submit. If you’re correct, the four words will be removed from the grid and the theme connecting them will be revealed. Guess incorrectly and it’ll count as a mistake. You can only make four mistakes before the game ends.

Hints for today’s Connections

We can help you solve today’s Connections puzzle by telling you the four themes. If you need more assistance, we’ll also give you one word from each group below.

Today’s themes

  • GRASSY AREA
  • DEAL WITH
  • MOVIES WITH “S” REMOVED
  • ___ LAW

One-answer reveals

  • GRASSY AREA – GREEN
  • DEAL WITH – ADDRESS
  • MOVIES WITH “S” REMOVED – CAR
  • ___ LAW – CRIMINAL

Today’s Connections answers

Still no luck? That’s OK. This puzzle is designed to be difficult. If you just want to see today’s Connections answers, we’ve got you covered below:

Connections grids vary widely and change every day. If you couldn’t solve today’s puzzle, be sure to check back in tomorrow.

Technology

SpaceX is suing the California Coastal Commission for not letting it launch more rockets


Last week, the California Coastal Commission rejected a plan for SpaceX to launch up to 50 rockets this year at the Vandenberg Space Force Base in Santa Barbara County. The company responded yesterday with a lawsuit, alleging that the state agency overreached its authority and discriminated against its CEO.

The Commission’s goal is to protect California’s coasts and beaches, as well as the animals that live there. The agency has control over private companies’ requests to use the state coastline, but it can’t deny activities by federal departments. The denied launch request was actually made by the US Space Force on behalf of SpaceX, asking that the company be allowed to launch 50 of its Falcon 9 rockets, up from 36.

While the commissioners did raise concerns about SpaceX CEO Elon Musk’s political statements and the spotty safety records at his companies during their review of the launch request, the assessment focused on the relationship between SpaceX and the Space Force. The Space Force’s case is that “because it is a customer of — and reliant on — SpaceX’s launches and satellite network, SpaceX launches are a federal agency activity,” the Commission stated. “However, this does not align with how federal agency activities are defined in the Coastal Zone Management Act’s regulations or the manner in which the Commission has historically implemented those regulations.” The California Coastal Commission claimed that at least 80 percent of the SpaceX rockets contain payloads for Musk’s Starlink company rather than payloads for government clients.

The SpaceX suit filed with the Central District of California court is seeking an order to designate the launches as federal activity, which would cut the Commission’s oversight out of its future launch plans.


Technology

Microsoft’s Differential Transformer cancels attention noise in LLMs


Improving the capabilities of large language models (LLMs) in retrieving in-prompt information remains an area of active research that can impact important applications such as retrieval-augmented generation (RAG) and in-context learning (ICL).

Microsoft Research and Tsinghua University researchers have introduced Differential Transformer (Diff Transformer), a new LLM architecture that improves performance by amplifying attention to relevant context while filtering out noise. Their findings, published in a research paper, show that Diff Transformer outperforms the classic Transformer architecture in various settings.

Transformers and the “lost-in-the-middle” phenomenon

The Transformer architecture is the foundation of most modern LLMs. It uses an attention mechanism to weigh the importance of different parts of the input sequence when generating output. The attention mechanism employs the softmax function, which normalizes a vector of values into a probability distribution. In Transformers, the softmax function assigns attention scores to different tokens in the input sequence.
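As a minimal illustration of that normalization step (the logit values below are invented, not drawn from any real model), softmax exponentiates the raw attention scores and divides by their sum so the resulting weights form a probability distribution:

```python
import math

def softmax(scores):
    # Subtract the max before exponentiating for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw attention logits for four input tokens.
logits = [2.0, 1.0, 0.5, 0.1]
weights = softmax(logits)
# The weights are all positive and sum to 1 — a probability distribution.
assert abs(sum(weights) - 1.0) < 1e-9
```

The token with the highest logit receives the largest share of attention, but every token receives some nonzero weight, which matters for the "noise" discussion below.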


However, studies have shown that Transformers struggle to retrieve key information from long contexts.

“We began by investigating the so-called ‘lost-in-the-middle’ phenomenon,” Furu Wei, Partner Research Manager at Microsoft Research, told VentureBeat, referring to previous research findings that showed that LLMs “do not robustly make use of information in long input contexts” and that “performance significantly degrades when models must access relevant information in the middle of long contexts.”

Wei and his colleagues also observed that some LLM hallucinations, where the model produces incorrect outputs despite having relevant context information, correlate with spurious attention patterns.

“For example, large language models are easily distracted by context,” Wei said. “We analyzed the attention patterns and found that the Transformer attention tends to over-attend irrelevant context because of the softmax bottleneck.”


The softmax function used in Transformer’s attention mechanism tends to distribute attention scores across all tokens, even those that are not relevant to the task. This can cause the model to lose focus on the most important parts of the input, especially in long contexts.

“Previous studies indicate that the softmax attention has a bias to learn low-frequency signals because the softmax attention scores are restricted to positive values and have to be summed to 1,” Wei said. “The theoretical bottleneck renders [it] such that the classic Transformer cannot learn sparse attention distributions. In other words, the attention scores tend to flatten rather than focusing on relevant context.”
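The flattening effect Wei describes can be sketched numerically: because every softmax weight is positive and the weights must sum to 1, the share assigned to a single relevant token shrinks as more irrelevant tokens enter the context. The logit values here are invented purely for illustration:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def relevant_weight(n_irrelevant):
    # One clearly relevant token (logit 4.0) among n irrelevant ones (logit 0.0).
    return softmax([4.0] + [0.0] * n_irrelevant)[0]

# The relevant token's share of attention decays as the context grows,
# even though its own logit never changes.
for n in (8, 64, 512):
    print(n, relevant_weight(n))
```

In this toy setting the relevant token's weight falls below 10% once a few hundred irrelevant tokens are present, which mirrors the long-context retrieval failures described above.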

Differential Transformer

Differential Transformer (source: arXiv)

To address this limitation, the researchers developed the Diff Transformer, a new foundation architecture for LLMs. The core idea is to use a “differential attention” mechanism that cancels out noise and amplifies the attention given to the most relevant parts of the input.

The Transformer uses three vectors to compute attention: query, key, and value. The classic attention mechanism performs the softmax function on the entire query and key vectors.

The proposed differential attention works by partitioning the query and key vectors into two groups and computing two separate softmax attention maps. The difference between these two maps is then used as the attention score. This process eliminates common noise, encouraging the model to focus on information that is pertinent to the input.
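The mechanism can be sketched in plain Python as follows. This is a simplified illustration, not the authors' implementation: the paper learns the scaling factor λ during training, whereas it is fixed here, and the function and variable names are our own.

```python
import math

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

def attention_map(Q, K):
    # Row-wise softmax of Q K^T / sqrt(d) for lists of d-dimensional vectors.
    d = len(Q[0])
    scores = [[sum(q[i] * k[i] for i in range(d)) / math.sqrt(d) for k in K]
              for q in Q]
    return [softmax(row) for row in scores]

def diff_attention(Q1, K1, Q2, K2, V, lam=0.5):
    """Differential attention sketch: subtract a second softmax map
    (scaled by lam) from the first, so noise common to both maps cancels."""
    A1 = attention_map(Q1, K1)
    A2 = attention_map(Q2, K2)
    n, dv = len(V), len(V[0])
    out = []
    for i in range(n):
        weights = [A1[i][j] - lam * A2[i][j] for j in range(n)]
        out.append([sum(weights[j] * V[j][c] for j in range(n)) for c in range(dv)])
    return out
```

When the two maps agree exactly — pure common-mode "noise" — the combined weights shrink to a fraction 1 − λ of the originals, which is the cancellation effect the noise-canceling analogy below describes; attention that appears in only one map survives the subtraction.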


The researchers compare their approach to noise-canceling headphones or differential amplifiers in electrical engineering, where the difference between two signals cancels out common-mode noise.

While Diff Transformer involves an additional subtraction operation compared to the classic Transformer, it maintains efficiency thanks to parallelization and optimization techniques.

“In the experimental setup, we matched the number of parameters and FLOPs with Transformers,” Wei said. “Because the basic operator is still softmax, it can also benefit from the widely used FlashAttention CUDA kernels for acceleration.”

In retrospect, the method used in Diff Transformer seems like a simple and intuitive solution. Wei compares it to ResNet, a popular deep learning architecture that introduced “residual connections” to improve the training of very deep neural networks. Residual connections made a very simple change to the traditional architecture yet had a profound impact.


“In research, the key is to figure out ‘what is the right problem?’” Wei said. “Once we can ask the right question, the solution is often intuitive. Similar to ResNet, the residual connection is an addition, compared with the subtraction in Diff Transformer, so it wasn’t immediately apparent for researchers to propose the idea.”

Diff Transformer in action

The researchers evaluated Diff Transformer on various language modeling tasks, scaling it up in terms of model size (from 3 billion to 13 billion parameters), training tokens, and context length (up to 64,000 tokens).

Their experiments showed that Diff Transformer consistently outperforms the classic Transformer architecture across different benchmarks. A 3-billion-parameter Diff Transformer trained on 1 trillion tokens showed consistent improvements of several percentage points compared to similarly sized Transformer models.

Further experiments with different model sizes and training dataset sizes confirmed the scalability of Diff Transformer. Their findings suggest that in general, Diff Transformer requires only around 65% of the model size or training tokens needed by a classic Transformer to achieve comparable performance.

The Diff Transformer is more efficient than the classic Transformer in terms of both parameters and training tokens (source: arXiv)

The researchers also found that Diff Transformer is particularly effective in using increasing context lengths. It showed significant improvements in key information retrieval, hallucination mitigation, and in-context learning.

While the initial results are promising, there’s still room for improvement. The research team is working on scaling Diff Transformer to larger model sizes and training datasets. They also plan to extend it to other modalities, including image, audio, video, and multimodal data.

The researchers have released the code for Diff Transformer, implemented with different attention and optimization mechanisms. They believe the architecture can help improve performance across various LLM applications.

“As the model can attend to relevant context more accurately, it is expected that these language models can better understand the context information with less in-context hallucinations,” Wei said. “For example, for the retrieval-augmented generation settings (such as Bing Chat, Perplexity, and customized models for specific domains or industries), the models can generate more accurate responses by conditioning on the retrieved documents.”



Technology

Zepto eyes $100M from Indian offices in third funding in 6 months


Zepto is in advanced stages of talks to raise $100 million in new investment, its third in the last six months, as the leading Indian quick commerce startup looks to rope in more domestic investors, sources familiar with the talks told TechCrunch.

The Mumbai-headquartered startup, which delivers grocery items and office stationery to customers’ doorsteps in 10 minutes in multiple Indian cities, is raising the new investment from Indian family offices and high net worth individuals.

Motilal Oswal, the asset management giant that earlier invested $40 million in Zepto, is running the mandate for the new funding deliberation, the sources said, requesting anonymity as the matter is private. The financial services firm has already received commitments for more than half of the allocation, according to another source familiar with the situation.

The new investment values Zepto at a $5 billion post-money valuation, the same value at which it recently closed a $340 million financing round in August. Zepto has raised more than $1 billion in the last six months, all of which remains in the bank.


Zepto is planning to go public next year and the new fundraise is aimed at expanding the base of domestic investors on its cap table. Zepto counts Avra, Lightspeed, Nexus, StepStone Group, YC Continuity, Glade Brook and Contrary among its backers.

Even as quick commerce startups retreat, consolidate, or shut down in many parts of the world, the model is showing encouraging signs in India. Quick commerce startups there are on track to generate more than $6 billion in sales this year, according to TechCrunch’s analysis.

In response to the fast rise of quick commerce, which is increasingly shaping consumer behavior in India, many e-commerce incumbents, including Flipkart, Myntra, and Nykaa, have been forced to scramble for ways to lower the time they take to deliver items to their customers.

Shares of Dmart, which runs one of the largest brick-and-mortar retail chains in India, fell this week after the firm confirmed that it was losing some business to quick commerce startups.


“We believe Quick Commerce players are expanding cities, categories, SKUs, AOVs and discounts, and creating parallel commerce for convenience-seeking customers,” analysts at Morgan Stanley wrote in a note this week.

Zepto – which competes with Zomato-owned BlinkIt, Prosus-backed Swiggy’s Instamart, and Tata’s BigBasket – has grown its annualized net run rate considerably in recent months, according to sources and an internal document reviewed by TechCrunch.

Zepto co-founder and chief executive Aadit Palicha told a group of investors in August that the startup projects to grow at 150% in the next 12 months, TechCrunch earlier reported.


Technology

DJI says US customs is blocking its drone imports


DJI tells The Verge that it currently cannot freely import all of its drones into the United States — and that its latest consumer drone, the Air 3S, won’t currently be sold at retail as a result.

That’s not because the United States has suddenly banned DJI drones — rather, DJI believes the import restrictions are “part of a broader initiative by the Department of Homeland Security to scrutinize the origins of products, particularly in the case of Chinese-made drones,” according to DJI.

DJI recently sent a letter to distributors with one possible reason why DHS is stopping some of its drones: the company says US Customs and Border Protection is citing the Uyghur Forced Labor Prevention Act (UFLPA) as justification for blocking the imports. In the letter, which has been floating around drone sites and Reddit for several days, DJI claims it doesn’t use any forced labor to manufacture drones.


Reuters reported on the letter earlier today; DJI spokesperson Daisy Kong confirmed the letter’s legitimacy to The Verge as well.

In a just-published official blog post, DJI is calling this all a “misunderstanding,” and writes that it’s currently sending documentation to US Customs to prove that it doesn’t manufacture anything in the Xinjiang region of China where Uyghurs have been forcibly detained, that it complies with US law and international standards, and that US retailers have audited its supply chain. DJI claims it manufactures all its products in Shenzhen or Malaysia.

US Customs and Border Protection didn’t reply to a request for comment.

While the US House of Representatives did pass a bill that would effectively ban DJI drones from being imported into the US, that ban would also need to pass the Senate. Last we checked, the Senate had removed the DJI ban from its version of the must-pass 2025 National Defense Authorization Act (though it did get reintroduced as an amendment and could potentially still make it into the final bill).


DJI says the “customs-related issue” has “primarily impacted” the company’s enterprise and agricultural drones, but has also now “limited us from offering the Air 3S to US customers beyond DJI.com.”

“We are actively working with U.S. Customs and Border Protection to resolve this issue and remain hopeful for a swift resolution,” writes DJI.

The US government has cracked down on DJI drones before, but not in a way that would keep stores from buying them, consumers from purchasing them, or individual pilots from flying them in the United States. Primarily, the US Department of Commerce’s “entity list” keeps US companies from exporting their technology to the Chinese company, and the US has sometimes restricted certain government entities from purchasing new DJI drones.

Even if DJI imports do get banned by Congress, the proposed law suggests existing owners could still use their drones — but the FCC could no longer authorize DJI gadgets with radios for use in the United States, which would effectively block all imports.


Technology

Rampant ransom payments highlight need for urgent action on cyber resiliency


A whopping 69% of organizations have reported paying ransoms this year, according to research by Cohesity, with 46% handing over a quarter of a million dollars or more to cybercriminals. It is hardly the picture of resiliency that is often painted by industry. Clearly, there is a disconnect between cyber resiliency policy and operational capability that urgently needs addressing. 

With the advent of Ransomware-as-a-Service platforms and the current global geopolitical situation, organizations face an existential threat from destructive cyber attacks that could put them out of business. This gap between confidence and capability needs to be addressed, but to do so, organizations must first recognize there is a problem.


Copyright © 2024 WordupNews.com