# Building AI-Powered Features with the Claude API: A Practical Guide

This guide walks through integrating the Claude API into real applications: authentication, prompt design, structured outputs, error handling, and cost management.
## Getting Started: API Key & SDK Setup
```bash
# Node.js
npm install @anthropic-ai/sdk

# Python
pip install anthropic
```
```javascript
import Anthropic from '@anthropic-ai/sdk';

// Reads your key from the environment — never hardcode it.
const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});
```
## Your First API Call
```javascript
const message = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [
    { role: "user", content: "Analyze this job listing for red flags: ..." }
  ],
});

console.log(message.content[0].text);
```
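The response's `content` field is an array of blocks, so in production it's safer to filter for text blocks than to assume index 0 is text. A minimal helper (the name `extractText` and the mocked response are illustrative, not part of the SDK):

```javascript
// Extract and join all text blocks from a Messages API response.
// Tolerates responses with multiple or zero text blocks.
function extractText(message) {
  return message.content
    .filter((block) => block.type === "text")
    .map((block) => block.text)
    .join("\n");
}

// Mocked response object shaped like the API's:
const mockMessage = {
  content: [{ type: "text", text: "No major red flags found." }],
  stop_reason: "end_turn",
};

console.log(extractText(mockMessage)); // "No major red flags found."
```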
## Prompt Engineering for Reliable, Structured Outputs
The key to production-grade AI features is predictable output. Always request JSON:
```javascript
const response = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 2048,
  system: `You are a job listing analyzer. Always respond in valid JSON.
Schema: { "risk_score": number (1-100), "flags": string[], "summary": string }`,
  messages: [{ role: "user", content: jobDescription }],
});

const analysis = JSON.parse(response.content[0].text);
```
Tips for reliable structured output:
- Define the exact JSON schema in the system prompt
- Use few-shot examples for complex outputs
- Set temperature to 0.2-0.3 for analytical tasks
- Validate the output with a schema validator (zod, ajv)
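The validation step can be sketched without any library — a hand-rolled check against the schema from the system prompt above (in practice a schema validator like zod or ajv would replace this; `validateAnalysis` is a hypothetical helper name):

```javascript
// Minimal validator for the analyzer schema:
// { "risk_score": number (1-100), "flags": string[], "summary": string }
function validateAnalysis(raw) {
  let data;
  try {
    data = JSON.parse(raw);
  } catch {
    throw new Error("Model did not return valid JSON");
  }
  const ok =
    typeof data.risk_score === "number" &&
    data.risk_score >= 1 && data.risk_score <= 100 &&
    Array.isArray(data.flags) &&
    data.flags.every((f) => typeof f === "string") &&
    typeof data.summary === "string";
  if (!ok) throw new Error("Response does not match the expected schema");
  return data;
}

const parsed = validateAnalysis(
  '{"risk_score": 72, "flags": ["vague compensation"], "summary": "Proceed with caution."}'
);
console.log(parsed.risk_score); // 72
```

Failing loudly here is the point: a thrown error can trigger a retry, while silently accepting malformed output corrupts downstream data.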
## Building a Scoring Engine (Jobisque Example)
In Jobisque, Claude scores job listings across 5 dimensions. The trick is using weighted scoring:
```javascript
// Per-dimension weights (sum to 1.0).
const weights = {
  requirements_realism: 0.25,
  compensation_transparency: 0.20,
  company_legitimacy: 0.25,
  role_clarity: 0.15,
  red_flag_language: 0.15,
};

// `scores` holds the per-dimension scores parsed from Claude's JSON response.
const overallScore = Object.entries(scores).reduce(
  (total, [key, value]) => total + value * weights[key],
  0
);
```
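Putting it together with hypothetical per-dimension scores (the values below are made up for illustration — in Jobisque they come from Claude's JSON response):

```javascript
// Per-dimension weights (sum to 1.0).
const weights = {
  requirements_realism: 0.25,
  compensation_transparency: 0.20,
  company_legitimacy: 0.25,
  role_clarity: 0.15,
  red_flag_language: 0.15,
};

// Hypothetical 0-100 scores as Claude might return them.
const scores = {
  requirements_realism: 80,
  compensation_transparency: 40,
  company_legitimacy: 90,
  role_clarity: 70,
  red_flag_language: 60,
};

// Weighted average: each dimension contributes score * weight.
const overallScore = Object.entries(scores).reduce(
  (total, [key, value]) => total + value * weights[key],
  0
);

console.log(overallScore.toFixed(1)); // "70.0"
```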
## Cost Management
| Model | Input (1M tokens) | Output (1M tokens) |
|-------|-------------------|--------------------|
| Claude Haiku 4.5 | $0.80 | $4.00 |
| Claude Sonnet 4.6 | $3.00 | $15.00 |
| Claude Opus 4.6 | $15.00 | $75.00 |
Cost optimization strategies:
- Use Haiku for simple tasks (classification, extraction)
- Cache system prompts — Anthropic's prompt caching reduces costs by up to 90% for repeated system prompts
- Limit `max_tokens` to what you actually need
- Batch requests when possible
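A quick back-of-envelope estimator built from the pricing table above (the `pricing` object and `estimateCost` helper are illustrative, not part of the SDK):

```javascript
// Per-million-token prices from the table above (USD).
const pricing = {
  haiku:  { input: 0.80,  output: 4.00 },
  sonnet: { input: 3.00,  output: 15.00 },
  opus:   { input: 15.00, output: 75.00 },
};

// Estimate the cost of a single call in USD.
function estimateCost(model, inputTokens, outputTokens) {
  const p = pricing[model];
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}

// e.g. a 2,000-token job listing analyzed with a 500-token response:
console.log(estimateCost("sonnet", 2000, 500).toFixed(4)); // "0.0135"
```

Running numbers like this per feature makes the Haiku-vs-Sonnet tradeoff concrete before you ship.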
## Error Handling in Production
```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

try {
  const response = await client.messages.create({ ... });
} catch (error) {
  if (error.status === 429) {
    // Rate limited — implement exponential backoff
    await sleep(Math.pow(2, retryCount) * 1000);
  } else if (error.status === 529) {
    // API overloaded — fallback to cached response or queue
  } else {
    throw error; // surface unexpected errors instead of swallowing them
  }
}
```
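That backoff logic can be wrapped into a reusable helper. A sketch, assuming 429/529 are the only retryable statuses; `maxRetries` and `baseMs` are arbitrary choices (a production base of ~1000ms, made configurable here so it's easy to test):

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retry fn with exponential backoff on 429 (rate limit) / 529 (overloaded).
async function withRetry(fn, maxRetries = 3, baseMs = 1000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const retryable = error.status === 429 || error.status === 529;
      if (!retryable || attempt >= maxRetries) throw error;
      await sleep(Math.pow(2, attempt) * baseMs); // 1x, 2x, 4x, ...
    }
  }
}
```

Usage is a one-line wrap: `const message = await withRetry(() => client.messages.create({ ... }));`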
Want AI integrated into your product? Book a call and let's scope it.
Want more like this?
Get the free toolkit + occasional tips on React Native, Next.js, and AI.
No spam. Unsubscribe anytime.