LIVA — Frequently Asked Questions

What is LIVA?

LIVA is a real-time avatar engine that makes AI conversations feel face-to-face. Avatars render on the client (any device or browser), so you don’t need cloud GPUs.

How is LIVA different from regular chatbots?

Chatbots type. LIVA talks, gestures, and reacts. You get lifelike lip-sync, expressions, and non-verbal cues—so the conversation feels human, not scripted.

What makes LIVA technically unique?

A patent-pending interactive-streaming protocol lets video render locally on users’ devices. That cuts infrastructure costs dramatically and keeps latency ultra-low.

Which models can I use?

Bring your own: OpenAI, Google Gemini, Anthropic, Mistral, Cohere, local endpoints—mix and switch as you like. LIVA is model-agnostic.
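
For a feel of what that looks like, here is a minimal sketch of a model-agnostic setup. The ModelConfig shape and its field names are hypothetical, not the shipping SDK surface:

  // Hypothetical config shape — swap providers without touching avatar code.
  interface ModelConfig {
    provider: "openai" | "gemini" | "anthropic" | "mistral" | "cohere" | "local";
    endpoint: string;  // chat-completions URL for the chosen provider
    apiKey?: string;   // omit for unauthenticated local endpoints
  }

  const model: ModelConfig = {
    provider: "anthropic",
    endpoint: "https://api.anthropic.com/v1/messages",
    apiKey: process.env.LLM_API_KEY,
  };

Switching providers is then a matter of changing this one object, not rewriting avatar logic.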

Does it work on my stack?

Yes. LIVA runs in modern web browsers and inside mobile apps via WebViews today. A React Native SDK is planned, with native iOS/Android SDKs to follow on the roadmap.

How fast is it?

Sub-second end-to-end in typical conditions (network dependent). The goal is to keep it “blink-fast” so turn-taking feels natural.

Do I need GPUs or special servers?

No. Rendering happens client-side. Your only server costs are your LLM/TTS usage and routine API calls.

Can I upload my own knowledge?

Yes. Add PDFs, docs, FAQs, or URLs. LIVA indexes your content so avatars answer with brand-accurate information. Update it anytime.
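
As a sketch only, a knowledge source list could be modeled like this; the KnowledgeSource type and its fields are illustrative assumptions, not the documented API:

  // Hypothetical knowledge-base types — names are illustrative only.
  interface KnowledgeSource {
    type: "pdf" | "doc" | "faq" | "url";
    location: string;  // file path or URL
  }

  const sources: KnowledgeSource[] = [
    { type: "pdf", location: "./docs/product-guide.pdf" },
    { type: "url", location: "https://example.com/support/faq" },
  ];
  // An indexing step would embed these so avatars answer from your content.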

Can I control the avatar’s look and personality?

Completely. Customize outfits, backgrounds, camera framing, gestures, tone, and safety level. Save presets per product or market.
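
A saved preset might look something like the sketch below; every key name here is an illustrative assumption, not the real schema:

  // Hypothetical preset — key names are placeholders, not the real schema.
  const retailPreset = {
    outfit: "uniform-blue",
    background: "storefront",
    camera: { framing: "medium-shot" },
    gestures: "expressive",
    tone: "friendly",
    safetyLevel: "strict",
  };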

Can I clone myself (face and voice)?

Avatar & voice cloning is coming soon. You’ll be able to turn a single photo and a short audio sample into your digital twin. (Early access waitlist available.)

What about languages and voices?

LIVA supports multiple major languages and a range of TTS providers (e.g., ElevenLabs). Pick voices per locale; swap providers as needed.
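
For illustration, per-locale voice selection could be modeled as a simple map; the provider and voice IDs below are example values, not guaranteed identifiers:

  // Hypothetical per-locale voice map — provider and IDs are examples only.
  const voices: Record<string, { provider: string; voiceId: string }> = {
    "en-US": { provider: "elevenlabs", voiceId: "Rachel" },
    "ar-AE": { provider: "elevenlabs", voiceId: "Sara" },
  };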

Is LIVA private and secure?

Rendering is local to the device. You choose where model calls go (your preferred LLM provider). Conversation storage is opt-in. Enterprise controls (logging, redaction, data retention) are available.

Can it run offline?

Avatars render locally, but most real-time experiences need a network connection for LLM/TTS. You can cache scripts for limited offline flows.

What are common use cases?

  • Customer support: human-feeling help on web, app, or kiosk
  • Digital concierge: travel, hospitality, and public spaces
  • Education & training: tutors, onboarding, compliance lessons
  • Sales & marketing: product demos, guided shopping, lead capture

How many users can it handle?

Because rendering happens on users’ devices, LIVA scales with your audience rather than with your server fleet. Serve thousands of simultaneous conversations without GPU farms.

How do I integrate LIVA?

Add a few lines of JS to embed an avatar widget, point it at your model endpoint, and (optionally) connect your knowledge base. Sample projects and a quick-start guide are provided.
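
Here is a minimal sketch of that flow. The function, option names, and URLs are placeholders under assumption; follow the quick-start guide for the real ones:

  // Hypothetical embed flow — names and URLs are placeholders, not the real API.
  interface WidgetOptions {
    container: HTMLElement;
    modelEndpoint: string;     // your LLM provider or proxy
    knowledgeBaseId?: string;  // optional: connect your indexed content
  }

  // Stand-in for whatever the widget script actually exposes.
  declare function createLivaWidget(opts: WidgetOptions): { start(): void };

  const widget = createLivaWidget({
    container: document.getElementById("avatar")!,
    modelEndpoint: "https://your-llm-endpoint.example/v1/chat",
    knowledgeBaseId: "kb_123",
  });
  widget.start();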

What does it cost?

For developers, pricing is usage-based (per token and per voice minute). Businesses can license custom avatars and enterprise features. The consumer app offers a free tier with an optional premium subscription. (See the pricing page for current details.)

Can I use LIVA for kids or regulated industries?

Yes, with the right guardrails. You can enforce content policies, routing, redaction, and human-handoff. For regulated deployments, talk to us about compliance needs.

Do you have real-world deployments?

Yes. For example, LIVA powers a smart concierge in Dubai taxis, helping riders get instant guidance during trips.

How do I get support?

Docs, sample code, and community channels are available for builders. Enterprise customers get priority support and SLAs.