Integration with JuiceCore
JuiceCore is fully compatible with the OpenAI API, which makes integration effortless. Any library, framework, or tool that works with OpenAI will work seamlessly with JuiceCore, with no code rewrites required!
🎯 What is OpenAI Compatibility?
This means that JuiceCore uses the same request and response format as the official OpenAI API. You can simply replace the endpoint and API key — and everything works!
Benefits:
- ✅ No special SDKs or libraries required
- ✅ Use familiar tools (OpenAI SDK, LangChain, LlamaIndex, etc.)
- ✅ Minimal changes to existing code
- ✅ Support for all features: streaming, temperature, max_tokens, etc.
🔄 Quick Start: 3 Steps to Integrate
To switch from OpenAI to JuiceCore, only three small changes are needed:
Step 1: Change Base URL
Instead of https://api.openai.com/v1, use:
https://api.juicecore.xyz/v1
Step 2: Use your JuiceCore API Key
Get a free key at juicecore.xyz after registration. Keys have the format jk-xxxxxxxxxx.
Step 3: Choose a Model
JuiceCore offers three specialized models:
- JuiceAi-Fast — for quick answers, chats, translations (almost unlimited on the Free plan!)
- JuiceAi-Pro — for complex tasks, analytics, creative content
- JuiceAi-Coder — for programming, code review, debugging
Everything else remains absolutely unchanged! Your code continues to work the same way.
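In Python, for example, the entire migration lives in the client constructor; every call after it stays exactly as it was:

from openai import OpenAI

# Before (OpenAI): client = OpenAI(api_key="sk-...")
# After (JuiceCore):
client = OpenAI(
    api_key="jk-your-api-key-here",          # Step 2: your JuiceCore key
    base_url="https://api.juicecore.xyz/v1"  # Step 1: the JuiceCore endpoint
)
# Step 3: pick a model in your existing calls, e.g. model="JuiceAi-Fast"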
🐍 Python
Python is one of the most popular languages for AI work. JuiceCore works great with the official openai library.
Installation
First, install the official OpenAI SDK (if not already installed):
pip install openai
Note: requires openai>=1.0.0. You can check your installed version with pip show openai.
Basic Example
The simplest way to start working with JuiceCore:
from openai import OpenAI
# Initialize JuiceCore client
client = OpenAI(
api_key="jk-your-api-key-here", # Your JuiceCore API Key
base_url="https://api.juicecore.xyz/v1" # JuiceCore endpoint
)
# Create request to JuiceAi-Fast model
response = client.chat.completions.create(
model="JuiceAi-Fast",
messages=[
{"role": "user", "content": "Hello! How are you?"}
]
)
# Get and print the response
print(response.choices[0].message.content)
What is happening here?
1. We create an OpenAI client but point base_url to JuiceCore.
2. We call the standard chat.completions.create() method.
3. We receive a response in the same format as from OpenAI (the sketch below inspects the standard fields).
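Because the response object follows the OpenAI schema, the usual metadata is available as well. A small sketch, assuming JuiceCore populates the standard usage counters:

print(response.model)                    # e.g. "JuiceAi-Fast"
print(response.usage.prompt_tokens)      # tokens in your request
print(response.usage.completion_tokens)  # tokens in the answer
print(response.usage.total_tokens)       # the two combined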
Using JuiceAi-Pro for Complex Tasks
JuiceAi-Pro is ideal for analytics, article writing, and creative tasks:
from openai import OpenAI
client = OpenAI(
api_key="jk-your-api-key-here",
base_url="https://api.juicecore.xyz/v1"
)
response = client.chat.completions.create(
model="JuiceAi-Pro", # Use Pro for complex tasks
messages=[
{
"role": "system",
"content": "You are an expert in data analysis and business analytics"
},
{
"role": "user",
"content": "Analyze the main e-commerce sales trends for Q4 2024 and provide recommendations for Q1 2025"
}
],
temperature=0.7, # Creativity (0.0 = deterministic, up to 2.0 = maximum randomness)
max_tokens=1500 # Maximum response length
)
print(response.choices[0].message.content)
Request Parameters (a combined example follows the list):
- temperature — controls creativity (0.0-2.0, recommended 0.7-1.0)
- max_tokens — limits response length
- top_p — alternative to temperature (nucleus sampling)
- frequency_penalty — reduces word repetition (-2.0 to 2.0)
- presence_penalty — encourages new topics (-2.0 to 2.0)
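The sampling parameters can be combined in one call. A quick sketch with top_p and both penalties (the values here are illustrative, not recommendations):

response = client.chat.completions.create(
    model="JuiceAi-Fast",
    messages=[{"role": "user", "content": "Suggest five blog post titles about coffee"}],
    top_p=0.9,              # nucleus sampling: sample from the top 90% probability mass
    frequency_penalty=0.5,  # discourage repeating the same words
    presence_penalty=0.3    # nudge the model toward new topics
)
print(response.choices[0].message.content)

As a rule of thumb, set either temperature or top_p, not both at once.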
Using JuiceAi-Coder for Programming
JuiceAi-Coder is a specialized model for developers:
from openai import OpenAI
client = OpenAI(
api_key="jk-your-api-key-here",
base_url="https://api.juicecore.xyz/v1"
)
response = client.chat.completions.create(
model="JuiceAi-Coder", # Model for code
messages=[
{
"role": "user",
"content": """Write a Python function to check if a number is prime.
The function should be optimized and include a docstring."""
}
],
temperature=0.3 # Better to use lower temperature for code
)
print(response.choices[0].message.content)
Example for Code Review:
code_to_review = """
def calculate_total(items):
total = 0
for item in items:
total = total + item['price'] * item['quantity']
return total
"""
response = client.chat.completions.create(
model="JuiceAi-Coder",
messages=[
{
"role": "system",
"content": "You are an experienced Python developer doing a code review"
},
{
"role": "user",
"content": f"Do a code review of this code and suggest improvements:\n\n{code_to_review}"
}
]
)
print(response.choices[0].message.content)
Streaming Responses
Streaming allows you to receive the response in chunks in real-time — ideal for chat interfaces:
from openai import OpenAI
client = OpenAI(
api_key="jk-your-api-key-here",
base_url="https://api.juicecore.xyz/v1"
)
# Enable streaming: stream=True
stream = client.chat.completions.create(
model="JuiceAi-Fast",
messages=[
{"role": "user", "content": "Tell an interesting story about space travel"}
],
stream=True # Key parameter!
)
# Receive and print each chunk
for chunk in stream:
if chunk.choices[0].delta.content is not None:
print(chunk.choices[0].delta.content, end="", flush=True)
print() # New line at the end
Why use streaming?
- ⚡ User sees the response immediately
- 📱 Better UX for chat applications
- 🔄 The feel of a "live" conversation
Working with Context (Dialogue History)
To create a full-fledged chatbot, you need to store the conversation history:
from openai import OpenAI
client = OpenAI(
api_key="jk-your-api-key-here",
base_url="https://api.juicecore.xyz/v1"
)
# Dialogue history
conversation_history = [
{"role": "system", "content": "You are a helpful AI assistant"}
]
def chat(user_message):
# Add user message
conversation_history.append({
"role": "user",
"content": user_message
})
# Get response
response = client.chat.completions.create(
model="JuiceAi-Fast",
messages=conversation_history
)
# Save assistant response
assistant_message = response.choices[0].message.content
conversation_history.append({
"role": "assistant",
"content": assistant_message
})
return assistant_message
# Usage
print(chat("Hello! What is my name?"))
print(chat("My name is Alex"))
print(chat("And what is my name?")) # AI will remember the name from context
Error Handling
Always add error handling to keep your application reliable:
from openai import OpenAI, APIError, RateLimitError, APIConnectionError
client = OpenAI(
api_key="jk-your-api-key-here",
base_url="https://api.juicecore.xyz/v1"
)
def safe_chat(message, model="JuiceAi-Fast"):
try:
response = client.chat.completions.create(
model=model,
messages=[{"role": "user", "content": message}]
)
return response.choices[0].message.content
except RateLimitError:
return "Error: Request limit exceeded. Try again later."
except APIConnectionError:
return "Error: Connection problem with API."
except APIError as e:
return f"API Error: {str(e)}"
except Exception as e:
return f"Unknown error: {str(e)}"
# Usage
result = safe_chat("Hello!")
print(result)
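When the failure is transient, such as a rate limit, retrying with exponential backoff is usually better than giving up at once. A sketch reusing the client and imports from the example above (chat_with_retry is a hypothetical helper; the retry count and delays are arbitrary choices, not JuiceCore requirements):

import time

def chat_with_retry(message, model="JuiceAi-Fast", retries=3):
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": message}]
            )
            return response.choices[0].message.content
        except (RateLimitError, APIConnectionError):
            if attempt == retries - 1:
                raise  # out of attempts, let the caller handle it
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s between attempts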
Using Environment Variables (Recommended!)
Never store API keys directly in code! Use environment variables:
import os
from openai import OpenAI
# Fail fast if the key is not set (checking after creating the client
# would be too late: the constructor itself raises on a missing key)
if not os.getenv("JUICECORE_API_KEY"):
    raise ValueError("JUICECORE_API_KEY is not set!")

# API key is read from the environment variable
client = OpenAI(
    api_key=os.getenv("JUICECORE_API_KEY"),
    base_url="https://api.juicecore.xyz/v1"
)
response = client.chat.completions.create(
model="JuiceAi-Fast",
messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
How to set an environment variable:
Linux/Mac:
export JUICECORE_API_KEY="jk-your-api-key-here"
Windows (PowerShell):
$env:JUICECORE_API_KEY="jk-your-api-key-here"
Or use a .env file with the python-dotenv library:
pip install python-dotenv
Create a .env file:
JUICECORE_API_KEY=jk-your-api-key-here
In your code:
from dotenv import load_dotenv
import os
from openai import OpenAI
# Load variables from .env file
load_dotenv()
client = OpenAI(
api_key=os.getenv("JUICECORE_API_KEY"),
base_url="https://api.juicecore.xyz/v1"
)
🟨 JavaScript / Node.js
JavaScript/Node.js is a popular choice for web applications and server-side apps.
Installation
npm install openai
Or using Yarn:
yarn add openai
Basic Example (ES Modules)
import OpenAI from 'openai';
// Initialize client
const client = new OpenAI({
apiKey: 'jk-your-api-key-here',
baseURL: 'https://api.juicecore.xyz/v1'
});
async function main() {
const response = await client.chat.completions.create({
model: 'JuiceAi-Fast',
messages: [
{ role: 'user', content: 'Hello! How are you?' }
]
});
console.log(response.choices[0].message.content);
}
main();
Basic Example (CommonJS)
If your project uses CommonJS (require):
const OpenAI = require('openai');
const client = new OpenAI({
apiKey: 'jk-your-api-key-here',
baseURL: 'https://api.juicecore.xyz/v1'
});
async function main() {
const response = await client.chat.completions.create({
model: 'JuiceAi-Fast',
messages: [
{ role: 'user', content: 'Hello! How are you?' }
]
});
console.log(response.choices[0].message.content);
}
main().catch(console.error);
Using JuiceAi-Coder for Code Generation
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: 'jk-your-api-key-here',
baseURL: 'https://api.juicecore.xyz/v1'
});
async function generateCode(prompt) {
try {
const response = await client.chat.completions.create({
model: 'JuiceAi-Coder',
messages: [
{
role: 'system',
content: 'You are an expert in Node.js and Express.js'
},
{
role: 'user',
content: prompt
}
],
temperature: 0.3, // Low temperature for code
max_tokens: 1000
});
return response.choices[0].message.content;
} catch (error) {
console.error('Error generating code:', error.message);
throw error;
}
}
// Usage
const code = await generateCode('Create an Express.js REST API with CRUD operations for users');
console.log(code);
Streaming in Node.js
Streaming for real-time responses:
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: 'jk-your-api-key-here',
baseURL: 'https://api.juicecore.xyz/v1'
});
async function streamResponse(userMessage) {
const stream = await client.chat.completions.create({
model: 'JuiceAi-Pro',
messages: [
{ role: 'user', content: userMessage }
],
stream: true
});
console.log('AI Response:');
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content || '';
process.stdout.write(content);
}
console.log('\n');
}
// Usage
await streamResponse('Explain quantum computers in simple terms');
Express.js API Endpoint
Example of integrating JuiceCore into an Express.js application:
import express from 'express';
import OpenAI from 'openai';
const app = express();
app.use(express.json());
const client = new OpenAI({
apiKey: process.env.JUICECORE_API_KEY,
baseURL: 'https://api.juicecore.xyz/v1'
});
// Chat Endpoint
app.post('/api/chat', async (req, res) => {
try {
const { message, model = 'JuiceAi-Fast' } = req.body;
if (!message) {
return res.status(400).json({ error: 'Message is required' });
}
const response = await client.chat.completions.create({
model: model,
messages: [
{ role: 'user', content: message }
]
});
res.json({
success: true,
response: response.choices[0].message.content
});
} catch (error) {
console.error('Error:', error);
res.status(500).json({
success: false,
error: error.message
});
}
});
// Streaming Endpoint
app.post('/api/chat/stream', async (req, res) => {
try {
const { message, model = 'JuiceAi-Fast' } = req.body;
// Set headers for SSE (Server-Sent Events)
res.setHeader('Content-Type', 'text/event-stream');
res.setHeader('Cache-Control', 'no-cache');
res.setHeader('Connection', 'keep-alive');
const stream = await client.chat.completions.create({
model: model,
messages: [{ role: 'user', content: message }],
stream: true
});
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content || '';
if (content) {
res.write(`data: ${JSON.stringify({ content })}\n\n`);
}
}
res.write('data: [DONE]\n\n');
res.end();
} catch (error) {
console.error('Error:', error);
res.status(500).json({ error: error.message });
}
});
app.listen(3000, () => {
console.log('Server running on http://localhost:3000');
});
React Example (Frontend)
Using JuiceCore API in a React application:
import { useState } from 'react';
import OpenAI from 'openai';
function ChatComponent() {
const [messages, setMessages] = useState([]);
const [input, setInput] = useState('');
const [loading, setLoading] = useState(false);
// WARNING: Never use API key on the client in production!
// This example is for demonstration. In production use a backend proxy.
const client = new OpenAI({
apiKey: 'jk-your-api-key-here',
baseURL: 'https://api.juicecore.xyz/v1',
dangerouslyAllowBrowser: true // Only for development!
});
const sendMessage = async () => {
if (!input.trim()) return;
const userMessage = { role: 'user', content: input };
setMessages(prev => [...prev, userMessage]);
setInput('');
setLoading(true);
try {
const response = await client.chat.completions.create({
model: 'JuiceAi-Fast',
messages: [...messages, userMessage]
});
const assistantMessage = {
role: 'assistant',
content: response.choices[0].message.content
};
setMessages(prev => [...prev, assistantMessage]);
} catch (error) {
console.error('Error:', error);
alert('Error sending message');
} finally {
setLoading(false);
}
};
return (
<div>
<div className="messages">
{messages.map((msg, idx) => (
<div key={idx} className={msg.role}>
{msg.content}
</div>
))}
</div>
<input
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={(e) => e.key === 'Enter' && sendMessage()}
disabled={loading}
/>
<button onClick={sendMessage} disabled={loading}>
{loading ? 'Sending...' : 'Send'}
</button>
</div>
);
}
export default ChatComponent;
⚠️ Important for Production:
Never use the API key directly on the client! Create a backend endpoint that proxies requests to JuiceCore, like the Express.js /api/chat endpoint shown above.
Error Handling in JavaScript
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: process.env.JUICECORE_API_KEY,
baseURL: 'https://api.juicecore.xyz/v1'
});
async function safeChat(message, model = 'JuiceAi-Fast') {
try {
const response = await client.chat.completions.create({
  model: model,
  messages: [{ role: 'user', content: message }]
}, {
  timeout: 30000 // 30-second timeout (a request option, not a body field)
});
return {
success: true,
content: response.choices[0].message.content
};
} catch (error) {
if (error.status === 429) {
return {
success: false,
error: 'Request limit exceeded. Try again later.'
};
}
if (error.status === 401) {
return {
success: false,
error: 'Invalid API Key.'
};
}
if (error instanceof OpenAI.APIConnectionTimeoutError) {
  return {
    success: false,
    error: 'Request timeout. Try again.'
  };
}
return {
success: false,
error: `Error: ${error.message}`
};
}
}
// Usage
const result = await safeChat('Hello!');
if (result.success) {
console.log(result.content);
} else {
console.error(result.error);
}
🌐 Other Programming Languages
JuiceCore works with any language that supports HTTP requests! Below are examples for popular languages.
cURL (Command Line)
The simplest way to test the API without code:
curl https://api.juicecore.xyz/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer jk-your-api-key-here" \
-d '{
"model": "JuiceAi-Fast",
"messages": [
{
"role": "user",
"content": "Hello! How are you?"
}
]
}'
Example with streaming:
curl https://api.juicecore.xyz/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer jk-your-api-key-here" \
-d '{
"model": "JuiceAi-Fast",
"messages": [{"role": "user", "content": "Tell a fairytale"}],
"stream": true
}'
Example with extra parameters:
curl https://api.juicecore.xyz/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer jk-your-api-key-here" \
-d '{
"model": "JuiceAi-Pro",
"messages": [
{"role": "system", "content": "You are a marketing expert"},
{"role": "user", "content": "Create a marketing campaign plan"}
],
"temperature": 0.8,
"max_tokens": 1500
}'
PHP
<?php
function juiceCoreChat($message, $model = 'JuiceAi-Fast') {
$apiKey = 'jk-your-api-key-here';
$url = 'https://api.juicecore.xyz/v1/chat/completions';
$data = [
'model' => $model,
'messages' => [
['role' => 'user', 'content' => $message]
],
'temperature' => 0.7
];
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($data));
curl_setopt($ch, CURLOPT_HTTPHEADER, [
'Content-Type: application/json',
'Authorization: Bearer ' . $apiKey
]);
$response = curl_exec($ch);
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if ($httpCode !== 200) {
throw new Exception("API Error: HTTP $httpCode");
}
$result = json_decode($response, true);
return $result['choices'][0]['message']['content'];
}
// Usage
try {
$response = juiceCoreChat('Hello! How are you?');
echo $response;
} catch (Exception $e) {
echo "Error: " . $e->getMessage();
}
?>
PHP with Wrapper Class:
<?php
class JuiceCoreClient {
private $apiKey;
private $baseUrl = 'https://api.juicecore.xyz/v1';
public function __construct($apiKey) {
$this->apiKey = $apiKey;
}
public function chat($messages, $model = 'JuiceAi-Fast', $options = []) {
$url = $this->baseUrl . '/chat/completions';
$data = array_merge([
'model' => $model,
'messages' => $messages
], $options);
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($data));
curl_setopt($ch, CURLOPT_HTTPHEADER, [
'Content-Type: application/json',
'Authorization: Bearer ' . $this->apiKey
]);
$response = curl_exec($ch);
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch); // close the handle before any exception is thrown
if ($httpCode !== 200) {
    $error = json_decode($response, true);
    throw new Exception($error['error']['message'] ?? 'Unknown error');
}
return json_decode($response, true);
}
}
// Usage
$client = new JuiceCoreClient('jk-your-api-key-here');
$response = $client->chat([
['role' => 'user', 'content' => 'Write a poem about programming']
], 'JuiceAi-Pro', [
'temperature' => 0.9,
'max_tokens' => 500
]);
echo $response['choices'][0]['message']['content'];
?>
Go (Golang)
package main
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
)
type Message struct {
Role string `json:"role"`
Content string `json:"content"`
}
type ChatRequest struct {
Model string `json:"model"`
Messages []Message `json:"messages"`
Temperature float64 `json:"temperature,omitempty"`
MaxTokens int `json:"max_tokens,omitempty"`
}
type ChatResponse struct {
Choices []struct {
Message Message `json:"message"`
} `json:"choices"`
}
func main() {
apiKey := "jk-your-api-key-here"
url := "https://api.juicecore.xyz/v1/chat/completions"
requestBody, _ := json.Marshal(ChatRequest{
Model: "JuiceAi-Fast",
Messages: []Message{
{Role: "user", Content: "Hello from Go!"},
},
})
req, _ := http.NewRequest("POST", url, bytes.NewBuffer(requestBody))
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", "Bearer "+apiKey)
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
panic(err)
}
defer resp.Body.Close()
body, _ := io.ReadAll(resp.Body)
var result ChatResponse
json.Unmarshal(body, &result)
fmt.Println(result.Choices[0].Message.Content)
}