Show HN: I built an AI phone system and wrote step-by-step instructions (www.yadalog.com)

🤖 AI Summary
A developer posted a step-by-step tutorial for building a real-time AI phone system that streams live calls through Twilio Media Streams into a FastAPI WebSocket bridge, forwards the audio to the OpenAI Realtime API, and stores transcripts and summaries in Supabase. The guide covers the prerequisites (a Twilio number, OpenAI Realtime access, a Supabase project, Python 3.8+, ngrok), dependency installation, a Twilio webhook that returns TwiML to connect incoming calls to a WebSocket, and a bridge that handles Twilio media events (connected, start, media, stop) and OpenAI response.audio.delta events for bidirectional streaming.

Key technical details: audio conversion between Twilio's μ-law 8 kHz format and OpenAI's PCM16 24 kHz format is handled with audioop-lts, async tasks keep the bidirectional media path low-latency, and a RAG (retrieval-augmented generation) layer backed by Supabase tables (calls, call_transcripts, user_settings, agent_prompts, knowledge_base) supplies call context and personalized prompts. The system saves line-by-line transcripts, generates AI summaries, and supports multi-tenant deployments.

This approach enables 24/7 AI receptionists, automated voicemail with transcripts, live customer support and sales qualification, and workflow/CRM triggers, offering a scalable, cost-effective alternative to traditional call centers while surfacing practical implementation details for engineers.
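To make the webhook step concrete, here is a minimal sketch of the TwiML handler, assuming a FastAPI app exposed through ngrok; the PUBLIC_HOST value and the /media-stream route name are illustrative placeholders, not taken from the tutorial.

```python
# Minimal sketch of the incoming-call webhook, assuming FastAPI behind ngrok.
# PUBLIC_HOST and the /media-stream route are illustrative placeholders.
from fastapi import FastAPI, Response

app = FastAPI()
PUBLIC_HOST = "your-subdomain.ngrok.app"  # the public ngrok hostname

@app.post("/incoming-call")
async def incoming_call() -> Response:
    # TwiML telling Twilio to open a Media Streams WebSocket to our bridge.
    twiml = f"""<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Connect>
    <Stream url="wss://{PUBLIC_HOST}/media-stream" />
  </Connect>
</Response>"""
    return Response(content=twiml, media_type="application/xml")
```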
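The format conversion the summary mentions can be sketched with the audioop-lts backport (imported as audioop), assuming mono μ-law at 8 kHz on the Twilio side and PCM16 at 24 kHz on the OpenAI side; the helper names below are hypothetical.

```python
# Sketch of the audio conversion described above, using the audioop-lts
# backport (imported as `audioop`). Function names here are hypothetical.
import audioop
import base64

_up_state = None    # resampler state, 8 kHz -> 24 kHz
_down_state = None  # resampler state, 24 kHz -> 8 kHz

def twilio_to_openai(b64_payload: str) -> bytes:
    """Twilio media payload (base64 μ-law, 8 kHz mono) -> PCM16 at 24 kHz."""
    global _up_state
    ulaw = base64.b64decode(b64_payload)
    pcm_8k = audioop.ulaw2lin(ulaw, 2)  # μ-law -> 16-bit linear PCM
    pcm_24k, _up_state = audioop.ratecv(pcm_8k, 2, 1, 8000, 24000, _up_state)
    return pcm_24k

def openai_to_twilio(pcm_24k: bytes) -> str:
    """OpenAI audio delta (PCM16, 24 kHz mono) -> base64 μ-law at 8 kHz."""
    global _down_state
    pcm_8k, _down_state = audioop.ratecv(pcm_24k, 2, 1, 24000, 8000, _down_state)
    ulaw = audioop.lin2ulaw(pcm_8k, 2)
    return base64.b64encode(ulaw).decode("ascii")
```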
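And a rough sketch of the bidirectional bridge itself, reusing the app and conversion helpers from the sketches above and assuming the websockets client library for the OpenAI side; the event names follow the Twilio Media Streams and OpenAI Realtime protocols cited in the summary, while the model name, route, and overall structure are illustrative rather than the author's actual code.

```python
# Rough sketch of the WebSocket bridge, reusing `app`, `twilio_to_openai`, and
# `openai_to_twilio` from the sketches above. Event names follow the Twilio
# Media Streams and OpenAI Realtime protocols; the model name is a placeholder.
import asyncio
import base64
import json
import os

import websockets
from fastapi import WebSocket

OPENAI_REALTIME_URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"

@app.websocket("/media-stream")
async def media_stream(twilio_ws: WebSocket) -> None:
    await twilio_ws.accept()
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",
    }
    # Depending on the installed `websockets` version the kwarg is
    # `additional_headers` (newer releases) or `extra_headers` (legacy).
    async with websockets.connect(OPENAI_REALTIME_URL, additional_headers=headers) as openai_ws:
        stream_sid = None

        async def twilio_to_model() -> None:
            # Twilio -> OpenAI: decode media frames and append them to the
            # model's input audio buffer until the "stop" event arrives.
            nonlocal stream_sid
            async for message in twilio_ws.iter_text():
                event = json.loads(message)
                if event["event"] == "start":
                    stream_sid = event["start"]["streamSid"]
                elif event["event"] == "media":
                    pcm_24k = twilio_to_openai(event["media"]["payload"])
                    await openai_ws.send(json.dumps({
                        "type": "input_audio_buffer.append",
                        "audio": base64.b64encode(pcm_24k).decode("ascii"),
                    }))
                elif event["event"] == "stop":
                    break

        async def model_to_twilio() -> None:
            # OpenAI -> Twilio: stream response.audio.delta chunks back to the
            # caller as μ-law media frames.
            async for message in openai_ws:
                event = json.loads(message)
                if event.get("type") == "response.audio.delta" and stream_sid:
                    pcm_24k = base64.b64decode(event["delta"])
                    await twilio_ws.send_json({
                        "event": "media",
                        "streamSid": stream_sid,
                        "media": {"payload": openai_to_twilio(pcm_24k)},
                    })

        # Run both directions concurrently for low-latency full-duplex audio.
        await asyncio.gather(twilio_to_model(), model_to_twilio())
```

Running the two coroutines concurrently under asyncio.gather is what the summary's point about async tasks and low-latency bidirectional media comes down to: neither direction ever waits on the other.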