CyberDeck
Problem:
I wanted to explore how to run a large language model locally without internet access, so the AI could process input and generate responses entirely offline. At first I didn’t know what I would use this project for, but it was a good way to practice Python and integrate different tools.
Method:
Downloading the Model & Setup:
Chose DeepSeek as the LLM because it’s open source and easy to run locally.
Installed via Ollama.
Installed Python + pip, and added the libraries:
requests (HTTP requests)
pyttsx3 (text-to-speech)
PyAudio (audio input/output)
SpeechRecognition (voice-to-text)
json (data handling; part of Python's standard library, so no install needed)
These allowed the AI to listen, understand, and speak back.
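A quick way to confirm the setup is to check that each package resolves before running anything. This is a small sketch (not part of the original project); note that PyAudio and SpeechRecognition install under different import names than their pip names.

```python
import importlib.util

def missing_packages(names):
    """Return the subset of import names that are not installed."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Import names for the third-party packages listed above.
# pip names differ for two of them: PyAudio -> pyaudio,
# SpeechRecognition -> speech_recognition. json ships with Python.
required = ["requests", "pyttsx3", "pyaudio", "speech_recognition"]
print("Missing:", missing_packages(required))
```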
Creating Code for the Model:
Wrote the first Python script: the chatbot loaded the model and responded to the text “hi.”
Basic interaction worked, but only one exchange at a time.
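A first script along these lines can talk to Ollama's local HTTP endpoint with `requests`. This is a minimal sketch, not the project's actual code; the model tag `deepseek-r1` is an assumption and should match whatever `ollama pull` fetched.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL = "deepseek-r1"  # assumed model tag; use the one you pulled with Ollama

def build_payload(prompt, model=MODEL):
    """Build the JSON body for a single non-streaming generation request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt):
    """Send one prompt to the local model and return its text reply."""
    reply = requests.post(OLLAMA_URL, json=build_payload(prompt), timeout=120)
    reply.raise_for_status()
    return reply.json()["response"]

# print(ask("hi"))  # uncomment with the Ollama server running (`ollama serve`)
```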
Testing the Model:
Wrote second script: similar to first, but added text-to-speech output.
AI responded to “hi” and produced an audio file that played automatically.
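Adding speech output with pyttsx3 can look like the sketch below. The `clean_for_speech` helper is my own assumption (model output often contains markdown characters that sound odd when read aloud), not something from the original script.

```python
def clean_for_speech(text):
    """Strip markdown emphasis markers before speaking (assumed helper)."""
    return text.replace("*", "").replace("#", "").strip()

def speak(text):
    """Read text aloud with the system voice via pyttsx3."""
    import pyttsx3  # imported here so the pure helper above works without it
    engine = pyttsx3.init()
    engine.say(clean_for_speech(text))
    engine.runAndWait()  # blocks until playback finishes

# speak("hi there")  # uncomment on a machine with audio output
```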
Talking to the Model:
Expanded script to add speech recognition.
This let me speak to the AI directly and hear it reply.
Difficult step, but enabled full voice interaction.
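Capturing speech with the SpeechRecognition library might look like this. The write-up doesn't say which recognizer backend was used; since the project runs offline, this sketch assumes CMU Sphinx (`recognize_sphinx`, which needs the pocketsphinx package). The `normalize` helper is also an assumption.

```python
def normalize(transcript):
    """Lower-case and trim a transcript for command matching (assumed helper)."""
    return transcript.lower().strip()

def listen_once():
    """Capture one utterance from the microphone and return it as text."""
    import speech_recognition as sr  # lazy import: helper above stays usable
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise
        audio = recognizer.listen(source)
    # recognize_sphinx runs fully offline; which backend the project
    # actually used is an assumption here.
    return normalize(recognizer.recognize_sphinx(audio))

# print(listen_once())  # speak after running; prints the recognized text
```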
Adding Wake Word & Kill Switch:
Added a wake word so the AI only listens when triggered (prevents it from responding to background sounds).
Added an “end chat” command that returns the AI to wake-word listening mode.
Accidentally created a kill switch while testing, which turned into a useful feature to stop the program entirely.
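The wake word, “end chat” command, and kill switch together form a small state machine. Here is one way to sketch that decision logic; the specific phrases `"computer"` and `"shut down"` are placeholders, since the original doesn't name them.

```python
WAKE_WORD = "computer"     # assumed wake word; the original isn't named
END_PHRASE = "end chat"
KILL_PHRASE = "shut down"  # the accidental kill switch; phrase assumed

def next_action(transcript, awake):
    """Decide what to do with one utterance.
    Returns (action, awake): action is 'ignore', 'wake', 'chat', 'sleep', or 'quit'."""
    text = transcript.lower().strip()
    if KILL_PHRASE in text:
        return "quit", False      # kill switch: stop the program entirely
    if not awake:
        if WAKE_WORD in text:
            return "wake", True   # wake word heard: start a conversation
        return "ignore", False    # background sound: stay asleep
    if END_PHRASE in text:
        return "sleep", False     # return to wake-word listening mode
    return "chat", True           # normal turn: send transcript to the LLM
```

Keeping this logic in a pure function makes it easy to test without a microphone, and the main loop only has to dispatch on the returned action.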
Result:
Completed a fully offline voice chatbot using DeepSeek + Ollama.
System can:
Wake on command
Recognize speech input
Generate a response with the local LLM
Speak the response back with TTS
Return to listening mode or shut down on command
A personal milestone project and my first major Python build: it proved I could integrate AI, audio, and interaction locally without relying on internet servers.


