Your personal AI assistant that works offline and is private.

Private AI Assistant is a personal AI assistant that runs entirely on your device, offline and private. It is designed to help you with daily tasks and provide information whenever you need it.

Import your own trained or fine-tuned models

Import your own customized LLM models to fulfill personal or business needs without worrying about privacy or data leakage.

Choose from the latest carefully selected SOTA open-source models

Carefully selected open-source models are available to download on your device, including Meta's Llama 3.2, Phi-3, Mixtral, Yi, Gemma-2B, Qwen2.5, and more.

Support for Ollama and other compatible LLM servers

Connect to Ollama and other compatible LLM servers to access more powerful models on your home or office network, while keeping your data inside your private network.

Awesome Features

It is designed to work on your device without internet access, so it is private, secure, and always available. Carefully selected open-source models can be downloaded to your device, and you can even import your own customized models. Ollama is supported too, letting you reach more powerful models on your home or office network. Privacy is the highest priority.

Various on-device Models

Download or import various open-source or self-trained models to meet your personal or business needs. Never worry about data leakage or privacy.

Ollama Support

Connect to Ollama-compatible servers to access more powerful models on your home or office network while keeping your data inside your private network.
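The connection works over Ollama's standard HTTP API. A minimal sketch of the request a client might send; the server address and model name below are illustrative examples, not app defaults:

```python
import json

# An Ollama server on your LAN; host, port, and model are examples.
OLLAMA_URL = "http://192.168.1.10:11434/api/chat"

def build_chat_request(model: str, prompt: str) -> str:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete response instead of a stream
    }
    return json.dumps(payload)

body = build_chat_request("llama3.2", "Summarize my notes.")
```

The body is then POSTed to the server; because the server lives on your own network, the prompt and response never leave it.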

Share anything with your assistant

You can share almost anything with your assistant: text, an image, a file, a link, a video, a voice recording, a PDF, an EPUB, and more. Then ask any questions about it.

Markdown Web Browser

It embeds a simple yet powerful web browser that converts the target web page into Markdown for easier reading. You can then use the assistant to summarize, translate, or ask questions about the page.
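Conceptually, the conversion maps HTML tags to their Markdown equivalents. A minimal sketch using Python's standard `html.parser`; the app's actual converter handles far more tags and edge cases:

```python
from html.parser import HTMLParser

class MarkdownConverter(HTMLParser):
    """Toy HTML-to-Markdown converter: headings, paragraphs, list items."""

    def __init__(self):
        super().__init__()
        self.out = []
        self._heading = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self._heading = "#" * int(tag[1]) + " "  # h2 -> "## "
        elif tag == "p":
            self.out.append("\n")
        elif tag == "li":
            self.out.append("\n- ")

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self._heading:
            self.out.append("\n" + self._heading + text + "\n")
            self._heading = None
        else:
            self.out.append(text)

    def to_markdown(self, html: str) -> str:
        self.feed(html)
        return "".join(self.out).strip()

md = MarkdownConverter().to_markdown(
    "<h1>Title</h1><p>Hello</p><ul><li>One</li></ul>"
)
```

Once the page is plain Markdown, it becomes ordinary text the model can summarize, translate, or answer questions about.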

Voice Transcription

It offers voice recognition and transcription based on the open-source Whisper model. You can use it to transcribe your voice to text, then summarize or translate the result.
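Whisper-family models process audio in fixed 30-second windows at a 16 kHz sample rate, so longer recordings are split into windows before transcription. A simplified sketch of that windowing; the app's real audio pipeline may differ:

```python
# Whisper expects 16 kHz mono audio in 30-second windows.
SAMPLE_RATE = 16_000          # samples per second
WINDOW_SECONDS = 30
WINDOW_SAMPLES = SAMPLE_RATE * WINDOW_SECONDS

def split_into_windows(samples: list) -> list:
    """Split raw PCM samples into consecutive 30-second windows."""
    return [samples[i:i + WINDOW_SAMPLES]
            for i in range(0, len(samples), WINDOW_SAMPLES)]

# A 75-second recording yields three windows: 30 s, 30 s, and 15 s.
windows = split_into_windows([0.0] * (SAMPLE_RATE * 75))
```

Each window is transcribed independently and the text segments are stitched back together in order.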

Commercial Model Support

If you want to combine the abilities of high-end commercial models with the assistant, you can. Enter your API token and you are ready to go.
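Most commercial providers expose an OpenAI-style chat endpoint that authenticates with a bearer token. A sketch of how such a request is typically assembled; the endpoint URL and model name are examples, not app settings:

```python
import json

# Example OpenAI-compatible endpoint; other providers use the same shape.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(token: str, model: str, prompt: str):
    """Return (headers, body) for an OpenAI-style chat completion call."""
    headers = {
        "Authorization": f"Bearer {token}",  # your API token goes here
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_request("YOUR_TOKEN", "gpt-4o-mini", "Hello!")
```

The token stays on your device and is only sent to the provider you configured.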

Use Cases

The application is designed to be simple and easy to use, providing various utilities to make your life easier. You can use it to chat, summarize, translate, or ask questions about anything. Thanks to the open-source community, you can also use it to understand photos, transcribe voice, and draw pictures.

Ask Questions

Nowadays many open-source models achieve SOTA performance comparable to commercial models. Their overall scores may not always be the best, but different models have different strengths: some excel at chat, some at math, some at coding. You can choose the most suitable one for each task. Our application lets you pick different models for different tasks, and you can even import your own customized models.

Text Models

LLaMA 1 & 2 & 3, Mistral 7B, Mixtral MoE, DBRX, Falcon, Chinese LLaMA / Alpaca and Chinese LLaMA-2 / Alpaca-2, Vigogne (French), BERT, Koala, Baichuan 1 & 2 + derivations, Aquila 1 & 2, Starcoder models, Refact, MPT, Bloom, Yi models, StableLM models, Deepseek models, Qwen models, PLaMo-13B, Phi models, GPT-2, Orion 14B, InternLM2, CodeShell, Gemma, Mamba, Grok-1, Xverse, Command-R models, SEA-LION, GritLM-7B + GritLM-8x7B, OLMo, OLMoE, Granite models, GPT-NeoX + Pythia, Snowflake-Arctic MoE, Smaug, Poro 34B, Bitnet b1.58 models, Flan T5, Open Elm models, ChatGLM3-6b + ChatGLM4-9b, SmolLM, EXAONE-3.0-7.8B-Instruct, FalconMamba Models, Jais, Bielik-11B-v2.3, RWKV-6

Understand Photos

You can upload images or share them from other apps, then ask any questions about the image: a photo, a screenshot, a drawing, etc. You can even scan receipts or documents and extract their content for further processing.

Visual Models

LLaVA 1.5 models, LLaVA 1.6 models, BakLLaVA, Obsidian, ShareGPT4V, MobileVLM 1.7B/3B models, Yi-VL, Mini CPM, Moondream, Bunny

Transcribe Voice

You can do real-time voice recognition and transcription, or upload an audio file: a voice recording, a live conversation, a podcast, etc. Then ask any questions about the recording.

Audio Models

Whisper, Whisper-large-v3, Whisper-large-v2, Whisper-large-v1, Whisper-tiny, Whisper-small, Whisper-medium, Whisper-large

Share Anything

You can share almost any content from other apps to this assistant: text, a link, a file, and more. The AI Assistant understands many content types, including URLs, PDF, EPUB, Markdown, and images. It reads the content, and you can then ask any questions about it.

Share to AI Assistant

The AI Assistant can process URL links, PDF files, EPUB files, Markdown files, images, and more. Share them to the AI Assistant and ask any questions about the content.

Share from AI Assistant

You can also share chat history with other apps, in plain text, Markdown, HTML, or JSON format.
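A sketch of what the Markdown and JSON exports could look like; the record layout here is an assumption for illustration, not the app's actual schema:

```python
import json

# A toy chat history; the app's real records may carry more fields.
chat = [
    {"role": "user", "content": "What is GGUF?"},
    {"role": "assistant", "content": "A file format for quantized models."},
]

def to_markdown(history: list) -> str:
    """Render each turn as a bold speaker label followed by the message."""
    return "\n\n".join(
        f"**{m['role'].title()}**: {m['content']}" for m in history
    )

def to_json(history: list) -> str:
    """Serialize the history as pretty-printed JSON."""
    return json.dumps(history, indent=2)

md_export = to_markdown(chat)
```

Plain text and HTML exports follow the same idea with different renderers.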

Screenshots of App

Here are some screenshots of the app.

FAQs

Here are some frequently asked questions about the app.

Which model types are supported now?

This application uses llama.cpp to run models, so it supports all quantized models in the GGUF format. GGUF offers many quantization types; you can compare their sizes and performance in the following table.

Quant     Model   bpw    Size [GB]   MMLU [%]
Q8_0      8B      8.50   7.43        65.23
Q6_K      8B      6.56   5.73        65.06
Q5_K_M    8B      5.67   5.00        64.90
Q5_K_S    8B      5.50   4.87        64.88
Q4_K_M    8B      4.82   4.30        64.64
Q4_K_S    8B      4.54   4.09        64.63
IQ4_NL    8B      4.52   4.07        64.33
IQ4_XS    8B      4.28   3.87        64.39
Q3_K_L    8B      4.08   3.81        62.85
Q3_K_M    8B      3.79   3.53        62.89
IQ3_M     8B      3.50   3.31        62.55
IQ3_S     8B      3.46   3.21        62.13
Q3_K_S    8B      3.44   3.20        59.14
IQ3_XS    8B      3.26   3.06        61.19
IQ3_XXS   8B      3.04   2.83        60.52
Q2_K      8B      2.90   2.79        55.90
IQ2_M     8B      2.64   2.53        57.56
IQ2_S     8B      2.40   2.35        53.98
IQ2_XS    8B      2.37   2.26        49.98
IQ2_XXS   8B      2.14   2.07        43.50
IQ1_M     8B      1.84   1.85        28.83
IQ1_S     8B      1.66   1.71        26.47
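As a rule of thumb, file size is roughly parameter count times bits per weight divided by 8. A sketch of that estimate; real GGUF files deviate somewhat because of metadata and mixed-precision tensors:

```python
def estimate_size_gb(params: float, bpw: float) -> float:
    """Rough model file size in gigabytes from parameter count and
    bits-per-weight. Actual GGUF files differ slightly due to metadata
    and layers kept at higher precision."""
    return params * bpw / 8 / 1e9

# An 8B model at ~4.5 bits per weight needs roughly 4.5 GB of storage.
size = estimate_size_gb(8e9, 4.5)
```

This is handy for checking whether a given quant will fit in your device's free storage and memory before downloading it.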

What are the minimum system requirements?

For offline use, you need an iPhone 13 Pro or later, or an iPad Pro (2020) or later, running iOS 17 or later. For online use, the minimum requirement is any device running iOS 17 or later.

  • iPhone 13 Pro & Pro Max
  • iPhone 14 Pro & Pro Max
  • iPhone 15 Pro & Pro Max
  • iPhone 16 Pro & Pro Max
  • iPad Pro (4th) 2020
  • iPad Pro (5th) 2021
  • iPad Pro (6th) 2022
  • iPad Pro (7th) 2024

What is the best model size for offline use?

The best model size for offline use is 3B. If you have a high-end iPad Pro, you can try a 7B model.

  • 1B
  • 1.5B
  • 2B
  • 3B

How is my privacy protected?

If you use an offline model, no data is ever sent to any server. If you use an Ollama model on your local network, your data travels only between your device and the Ollama server. The application has no server side, so it does not collect any data from you.

Contact Us

If you have any questions or suggestions, please contact us.
