❓ Frequently Asked Questions (FAQ)

This section answers the most common questions about JuiceCore — a simple and accessible AI API platform with automatic model selection.


🚀 General

What is JuiceCore?

JuiceCore is an AI API platform with intelligent routing that automatically selects the best current AI model for each request.

Instead of manually managing dozens of models and their versions, you work with only three logical modes:

  • JuiceAi-Fast — fast answers and minimal latency
  • JuiceAi-Pro — maximum quality and universal tasks
  • JuiceAi-Coder — programming, code analysis, technical tasks

All complex logic for model selection, updates, and fallback routing is performed automatically on the JuiceCore side.
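The three logical modes above can be sketched as a small lookup table. This is a hypothetical illustration (the `pick_mode` helper and task names are not part of the JuiceCore API; only the three logical model names come from the FAQ):

```python
# Hypothetical mapping of task types to JuiceCore's three logical modes.
# Only the model names ("JuiceAi-Fast", "JuiceAi-Pro", "JuiceAi-Coder")
# come from the documentation; the task categories are illustrative.
MODES = {
    "chat": "JuiceAi-Fast",       # fast answers, minimal latency
    "analysis": "JuiceAi-Pro",    # maximum quality, universal tasks
    "code": "JuiceAi-Coder",      # programming and technical tasks
}

def pick_mode(task: str) -> str:
    """Return the logical model name for a task type (defaults to Pro)."""
    return MODES.get(task, "JuiceAi-Pro")

print(pick_mode("code"))  # prints "JuiceAi-Coder"
```

In practice you pass one of these three names as the `model` field of an OpenAI-style request; everything below that level is resolved by JuiceCore.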


How does JuiceCore differ from using OpenAI, Anthropic, or Google directly?

  • Simpler: only 3 logical models instead of dozens of real ones
  • Cheaper: optimized routing focusing on price/quality
  • Automatic: fallback between providers without developer participation
  • Current: new models are connected without changes to your code
  • Compatible: full OpenAI API compatibility

JuiceCore allows you to focus on the product, not on constant model management.


🤖 Models

Which AI models does JuiceCore use?

JuiceCore routes requests across current flagship LLMs, including:

  • GPT‑5.2 (OpenAI)
  • Claude Opus 4.5 (Anthropic)
  • Gemini 3 Pro (Google)
  • other current models

The specific model and its version are selected dynamically for each request depending on task type, load, and availability.


Can I manually choose a specific model (e.g. GPT‑5.2)?

No. JuiceCore intentionally hides specific models from the user.

The goal of the platform is to free the developer from the need to track versions, compare quality, and manually switch models.

You always work with a logical mode, and the system ensures the optimal result.


Can the model change between requests?

Yes. JuiceCore may use different models for different requests.

This makes it possible to:

  • automatically improve answer quality
  • avoid downtime of individual providers
  • quickly connect new models

For the user, this happens transparently and does not require code changes.


🔑 API and Integration

Is JuiceCore compatible with the OpenAI API?

Yes. JuiceCore is fully compatible with the OpenAI API.

In most cases, it is sufficient to:

  • replace base_url with https://api.juicecore.xyz/v1
  • use the JuiceCore API key

Libraries and tools (OpenAI SDK, LangChain, LlamaIndex, etc.) work without additional configuration.
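Because the API is OpenAI-compatible, a request is an ordinary HTTPS POST to the endpoint above. A minimal stdlib sketch, assuming the standard OpenAI `/chat/completions` route (the API key value is a placeholder, and the network call itself is deliberately left out):

```python
# Sketch: building an OpenAI-style chat completion request for JuiceCore.
# BASE_URL is from the FAQ; API_KEY is a placeholder you must replace.
import json
import urllib.request

BASE_URL = "https://api.juicecore.xyz/v1"
API_KEY = "JUICECORE_API_KEY"  # placeholder — use your real key

def build_chat_request(model: str, user_message: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,  # a logical mode, e.g. "JuiceAi-Pro"
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("JuiceAi-Pro", "Hello!")
# urllib.request.urlopen(req) would send it; omitted to keep the sketch offline.
```

With the OpenAI SDK, the same idea reduces to passing `base_url` and `api_key` when constructing the client — no other code changes.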


Which programming languages are supported?

Any language that can make HTTP requests, including:

  • Python
  • JavaScript / Node.js
  • PHP
  • Java
  • Go
  • C#
  • Ruby
  • and others


Can JuiceCore be used in production?

Yes. JuiceCore is suitable for:

  • startups and MVPs
  • SaaS products
  • chatbots and AI assistants
  • internal tools

The platform is optimized for stable daily operation without complex configurations.


⚡ Performance and Stability

💡 Important
JuiceCore offers access to modern AI models at prices significantly below market rates.
In rare cases, short delays or dynamic routing changes are possible, but the system automatically adapts and chooses the best available option.
For most users, these processes go unnoticed.


Is streaming supported?

Yes. JuiceCore supports real-time streaming responses for all logical models.


🔒 Security and Privacy

Does JuiceCore store my requests?

JuiceCore stores only technical metadata necessary for analytics and billing (request time, token count, logical model type).

Message content is not stored.


Is data transmitted securely?

Yes. All requests are sent over HTTPS with TLS encryption.


📌 Need more information?

Check out the other sections of the documentation, or contact support via the juicecore.xyz website.