Jolla launches privacy-focused AI hardware


Jolla has officially taken the wraps off the first version of its personal server-based AI assistant. The reborn startup is building a privacy-focused AI device, the Jolla Mind2, which TechCrunch exclusively revealed at MWC in February.

At a live-streamed launch event on Monday, it also opened pre-orders, with the first units shipping to Europe later this year. Global pre-orders open in June, with plans to ship later this year or early next.

In the two months since we saw the first 3D-printed prototype of Jolla's AI-in-a-box, there's been a lot of buzz around other consumer-focused AI devices, such as Humane's Ai Pin and the Rabbit R1. But initial interest has faded amid poor or incomplete user experiences, and the realization that nascent AI gadgets are heavy on hype, light on utility.

The European startup behind the Jolla Mind2 is keen not to fall into that trap with its own AI device, according to CEO and co-founder Antti Saarnio. This is why the team is proceeding "cautiously" – trying to avoid the pitfalls of over-promising and under-delivering.

"I believe this is one of the biggest disruptive moments for AI – integrating it into our software. It's massive disruption. But basically the first approaches were rushed, and that was the problem," he told TechCrunch. "You should introduce software that actually works."

The criticism is harsh, but fair in light of recent launches.

Saarnio says the team plans to ship the first few hundred units (up to 500) of the device to early adopters in Europe this fall – possibly tapping into the community of enthusiasts Jolla has built around older products like its Sailfish mobile OS.

Pricing for the Jolla Mind2 is €699 (including VAT), so the hardware is considerably more expensive than the team originally planned. But there's also a lot more onboard RAM (16GB) and storage (1TB) than they originally budgeted for. Less good: users will also have to shell out for a monthly subscription starting at €9.99. So this is another AI device that won't be cheap.

AI agents living in a box

The Jolla Mind2 hosts a series of AI agents designed for different productivity-focused use cases. They are designed to integrate with relevant third-party services (via APIs) to perform various functions for you – such as an email agent that can triage your inbox and write and send messages, or a contacts agent, which Jolla briefly demoed at MWC, that can store intel about the people you interact with to keep you on top of your professional network.

Image credit: Jolla

In a video call with TechCrunch ahead of Monday's official launch, Saarnio demoed the latest version of the Jolla Mind2, showing some features we hadn't seen before, including the aforementioned email agent; a document preview and summary feature; e-signature capability for documents; and something new called "knowledge bases" (more on those below).

The productivity-focused features we saw demoed were working, though there were some notable latency issues. An apologetic Saarnio said demo gremlins had struck earlier in the day, causing last-minute performance issues.

Switching between agents was also manual in the chatbot-interface demo, but he said this will be automated in the final product, via the AI's semantic understanding of user queries.

Planned AI agents include: a calendar agent; a storage agent; a work-management agent; a message agent (for integration with third-party messaging apps); and a "coach agent," which the team plans to hook into third-party activity/health-tracking apps and devices so the user can query their health data on the device.

The promise of private, on-device processing is a key selling point for the product. Jolla insists that user queries and data stay stored on hardware the user owns – rather than, as with cloud services like OpenAI's ChatGPT, personal information being collected in the cloud for commercial data mining and someone else's profit.

Privacy sounds great, but latency will obviously need to be minimized. Given that Jolla is shooting for a productivity- and convenience-focused "prosumer" use case alongside its core strategic focus on firewalling personal data, this is doubly important.

The bottom line is that the device's onboard, circa 3BN-parameter AI model (what Saarnio calls a "little language model") can be connected to all sorts of third-party data sources. This makes user information available for further processing and scalable utility, without users having to worry about the security or integrity of their information being compromised when they tap into the power of AI.

For queries where the Jolla Mind2's local AI model might not be enough, the system will give users the option to send queries "off-world" – to third-party large language models (LLMs) – while informing them that doing so means their data is leaving a safe and private space. Jolla is using some form of color-coding for messages to indicate the level of data privacy that applies (e.g., blue for fully on-device processing; another color for when data is exposed to commercial AIs).
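The routing logic described here can be pictured in a few lines of code. This is purely an illustrative sketch – the function, topic list, and color labels are invented for this example and are not Jolla's actual implementation, which has not been published:

```python
# Hypothetical sketch of local-vs-external query routing with privacy labels.
# All names and the topic heuristic are invented for illustration; a real
# system would use the local model's own assessment of its capability.
from dataclasses import dataclass

@dataclass
class RoutedQuery:
    text: str
    destination: str   # "local" or "external"
    privacy_label: str # e.g. "blue" = fully on-device, "red" = data leaves device

# Topics the small on-device model is assumed to handle well
LOCAL_CAPABLE_TOPICS = {"email", "calendar", "contacts", "documents"}

def route_query(text: str, user_consents_to_external: bool = False) -> RoutedQuery:
    """Keep the query on-device when the local model can handle it;
    only send it to a third-party LLM with the user's explicit consent."""
    topic_is_local = any(t in text.lower() for t in LOCAL_CAPABLE_TOPICS)
    if topic_is_local:
        return RoutedQuery(text, "local", "blue")    # fully private
    if user_consents_to_external:
        return RoutedQuery(text, "external", "red")  # data leaves the device
    # No consent: fall back to the local model despite possible quality loss
    return RoutedQuery(text, "local", "blue")
```

The key design point is that the privacy label is computed alongside the routing decision, so the UI can always show the user where their data is about to go before the query is sent.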

Saarnio confirmed that performance will be at the forefront of the team's mind as they work on fine-tuning the product. "It's basically the old rule that if you want to make a breakthrough, it has to be five times better than existing solutions," he said.

Security must also be an absolute priority. The hardware will perform tasks such as establishing a private VPN connection so the user's mobile device or computer can communicate securely with the box. Saarnio added that there will be an encrypted, cloud-based backup of the user data stored on the box, in case of hardware failure or damage.

Which zero-knowledge encryption architecture the team chooses, to ensure no external access to data is possible, will be an important consideration for privacy-conscious users. Those details are still being worked out.

AI hardware with a purpose?

A major criticism that has been leveled at early AI devices like Humane's Ai Pin and Rabbit R1 takes the form of an awkward question: Can't it just be an app? Seeing as, you know, everyone is already packing a smartphone.

This is not an attack line that obviously applies to the Jolla Mind2. For one thing, the box housing the AI is meant to be static, not mobile – stored somewhere safe at home or the office – so you won't be carrying around two pieces of hardware most of the time. Indeed, your mobile (or desktop computer) is the usual tool for interacting with the Jolla Mind2, through a chatbot-style conversational interface.

Image credit: Jolla

Saarnio's second major argument to justify the Jolla Mind2 as a device is that it would be difficult – or rather, expensive – to scale a personal server-style approach to AI processing in the cloud.

"I think it would be very difficult to scale a cloud infrastructure if you had to run a separate local LLM for each user. You would have to keep the cloud service running all the time, because it could take five minutes to restart, so you can't use it that way," he argued. "Also, if you want to have a multi-device environment, I think this type of personal server is the only solution."

The aforementioned knowledge bases are another AI-agent-style feature: they let the user connect to curated repositories of information to further extend the tool's usefulness.

Saarnio gives the example of a curated information dump about deforestation in Africa. Once the knowledge base is loaded onto the device, it is available for the user to query – expanding the model's ability to help them understand more about a specific topic.
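Loading a curated corpus and querying it locally is, in essence, a retrieval step in front of the language model. The sketch below illustrates that pattern with deliberately naive keyword-overlap scoring – a stand-in for whatever retrieval method Jolla actually uses, which it has not disclosed:

```python
# Minimal sketch of a local "knowledge base": curated documents are indexed
# on-device and the best match for a question is retrieved. Keyword overlap
# stands in for real semantic retrieval; all names here are illustrative.

def load_knowledge_base(documents: list[str]) -> list[set[str]]:
    """Pre-tokenize each curated document for quick matching."""
    return [set(doc.lower().split()) for doc in documents]

def query_knowledge_base(question: str, docs: list[str],
                         index: list[set[str]]) -> str:
    """Return the document sharing the most words with the question."""
    q_tokens = set(question.lower().split())
    scores = [len(q_tokens & tokens) for tokens in index]
    best = max(range(len(docs)), key=lambda i: scores[i])
    return docs[best]

# Example corpus, echoing Saarnio's deforestation scenario
kb_docs = [
    "deforestation rates in africa rose sharply over the last decade",
    "reforestation projects in europe focus on native species",
]
index = load_knowledge_base(kb_docs)
```

In a real assistant the retrieved passage would then be fed to the local model as context for its answer, rather than returned verbatim.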

"The user [could say], 'Hey, I want to know about African deforestation,'" he explained. "Then the AI agent says, 'We have a supplier here [who has] created an external knowledge base about it. Would you like to connect to it?' And then you can start chatting with this knowledge base. You can also ask it to create a summary or a document/report about it.

"That's one of the big things we're thinking about – what we need: classified information on the internet," he added. "The user… can have some confidence that someone has classified that information."

If Jolla can make this fly, it could be very clever. LLMs not only fabricate information but present fabricated nonsense as if it were the absolute truth. So how can web users surfing the ever-growing landscape of AI-generated internet content be sure that what they're coming across is correct information?

The startup's answer to this accelerating cognitive crisis is for users to point their on-device AI models at their preferred sources of truth. It's a pleasingly human, agency-based solution to Big AI's truth problem. Combined with small AI models, intelligently curated data sources could also offer a more eco-friendly GenAI tool than Big AI's energy-draining, compute- and data-heavy approach.

Of course, Jolla will need useful knowledge bases to make this feature work. It envisages them being curated – and categorized – by users and the wider community, which it hopes will get behind its approach. Saarnio doesn't believe this is a big ask, suggesting domain experts will be able to easily assemble and share useful research repositories.

The Jolla Mind2 spotlights another issue: tech users' experience of software is often far outside their control. User interfaces are routinely designed to be deliberately distracting/attention-grabbing or even downright manipulative. So another selling point for the product is helping people reclaim their agency from all the dark patterns, cruft, notifications, etc. that plague the apps they use. You can ask the AI to strip out the noise for you.

Saarnio says the AI model will be able to filter third-party content. For example, a user could ask for their X feed to show only AI-related posts, and not be exposed to anything else. It's the equivalent of an on-demand superpower for controlling what you are and aren't exposed to digitally.
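Conceptually, that kind of feed filtering is a classification pass over incoming posts. The toy sketch below uses keyword matching as a placeholder – Jolla's assistant would presumably use the local model's semantic understanding instead, and nothing here reflects its actual code:

```python
# Illustrative sketch of on-device feed filtering: keep only posts matching
# a user-stated interest. Keyword matching stands in for the semantic
# classification a real assistant would do; the keyword set is invented.

AI_KEYWORDS = {"ai", "llm", "machine learning", "neural"}

def filter_feed(posts: list[str], keywords: set[str] = AI_KEYWORDS) -> list[str]:
    """Return only the posts mentioning at least one keyword (case-insensitive)."""
    return [p for p in posts if any(k in p.lower() for k in keywords)]
```

The point of running this on the user's own box, rather than in the platform's cloud, is that the filtering criteria serve the user's stated preference instead of an engagement metric.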

"The whole idea [is] to create a peaceful digital working environment," he added.

Saarnio knows all too well how difficult it is to convince people to buy new devices, given Jolla's long backstory as an alternative smartphone maker. Not surprisingly, then, the team is also making a B2B licensing play.

That's where the startup sees the biggest potential to scale usage of its AI device, he says – noting that partners could give it a path to selling hundreds of thousands of devices. Community sales, he admits, are unlikely to exceed a few tens of thousands at most, matching the limited scale of Jolla's dedicated, passionate fan base.

The AI component of the product is being developed under another (new) venture, called Venho AI. Along with being responsible for the software brains that power the Jolla Mind2, the company will act as a licensor to other businesses that want to offer their own branded versions of the personal-server-cum-AI-assistant concept.

Saarnio suggested telcos could be a potential target customer for licensing the AI model – these infrastructure operators once again look set to miss out on the digital spoils as tech giants bake generative AI into their platforms.

But, first things first: Jolla/Venho needs to ship a solid AI product.

“We must first mature the software, and test and build it with the community – and then, after the summer, we'll start discussions with distribution partners.”

