AI Chatbot
Problem:
I wanted to explore how to run a large language model locally without internet access, so the AI could process input and generate responses entirely offline. At first I didn’t know what I would use this project for, but it was a good way to practice Python and integrate different tools.
Method:
Downloading the Model & Setup:
Chose DeepSeek as the LLM because it’s open source and easy to run locally.
Installed via Ollama.
Installed Python + pip, and added the libraries:
requests (HTTP requests)
pyttsx3 (text-to-speech)
PyAudio (audio input/output)
SpeechRecognition (voice-to-text)
json (data handling; part of Python’s standard library, no install needed)
These allowed the AI to listen, understand, and speak back.
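For reference, the third-party libraries install with a single pip command, roughly pip install requests pyttsx3 SpeechRecognition PyAudio; the exact versions I used aren’t pinned here.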
Creating Code for the Model:
Wrote the first Python script: the chatbot loaded and responded to the text “hi.”
Basic interaction worked, but only one exchange at a time (minimal sketch below).
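A minimal sketch of that first script, assuming Ollama is serving the model on its default local port; the model tag "deepseek-r1" is a placeholder for whichever DeepSeek build I actually pulled:

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL = "deepseek-r1"  # assumed tag; substitute whatever DeepSeek model is installed

def ask(prompt):
    # Send one prompt to the local model and return the full (non-streamed) reply.
    response = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

print(ask("hi"))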
Testing the Model:
Wrote a second script: similar to the first, but with text-to-speech output added.
The AI responded to “hi” and produced an audio file that played automatically (sketch below).
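A sketch of the text-to-speech step, assuming pyttsx3’s default system voice; in my script the reply text came from the model rather than being hard-coded:

import pyttsx3

def speak(text):
    # Read the reply aloud through the system's default TTS voice.
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
    # pyttsx3 can also write the audio to disk first (closer to what I described):
    # engine.save_to_file(text, "reply.wav") followed by engine.runAndWait()

# In the real script this string is the model's response to "hi".
speak("Hello! How can I help you today?")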
Talking to the Model:
Expanded the script to add speech recognition.
This let me speak to the AI directly and hear it reply (listening sketch below).
A difficult step, but it enabled full voice interaction.
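A sketch of the listening side, assuming the default microphone. SpeechRecognition supports several back-ends; PocketSphinx (recognize_sphinx, which needs the pocketsphinx package) is shown because it stays offline, though it isn’t necessarily the exact back-end my final script used:

import speech_recognition as sr

def listen():
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:                  # microphone access uses PyAudio
        recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
        audio = recognizer.listen(source)
    # Transcribe locally with PocketSphinx so no audio leaves the machine.
    return recognizer.recognize_sphinx(audio)

print("You said:", listen())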
Adding Wake Word & Kill Switch:
Added a wake word so the AI only listens when triggered (prevents it from responding to background sounds).
Added an “end chat” command to return it to wake-word listening mode.
Accidentally created a kill switch while testing, which turned into a useful feature for stopping the program entirely (loop sketch below).
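A rough sketch of how the wake word, “end chat” command, and kill switch fit together; the phrases ("computer", "end chat", "shut down") and the model tag are placeholders rather than my exact choices:

import requests
import pyttsx3
import speech_recognition as sr

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "deepseek-r1"        # assumed model tag
WAKE_WORD = "computer"       # placeholder wake word
END_PHRASE = "end chat"      # drop back to wake-word listening
KILL_PHRASE = "shut down"    # stop the program entirely

recognizer = sr.Recognizer()

def listen():
    # Capture one utterance and transcribe it offline with PocketSphinx.
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_sphinx(audio).lower()
    except sr.UnknownValueError:
        return ""            # nothing intelligible was heard

def speak(text):
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

def ask(prompt):
    reply = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    return reply.json()["response"]

while True:
    # Idle state: ignore everything until the wake word is heard.
    if WAKE_WORD not in listen():
        continue
    speak("Listening.")
    while True:              # active conversation
        heard = listen()
        if KILL_PHRASE in heard:
            speak("Shutting down.")
            raise SystemExit  # the accidental kill switch, kept on purpose
        if END_PHRASE in heard:
            speak("Going back to sleep.")
            break            # return to wake-word listening
        if heard:
            speak(ask(heard))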
Result:
Completed a fully offline voice chatbot using DeepSeek + Ollama.
System can:
Wake on command
Recognize speech input
Generate a response with the local LLM
Speak the response back with TTS
Return to listening mode or shut down on command
A personal milestone: my first major Python build, and proof that I could integrate AI, audio, and voice interaction locally without relying on internet servers.
2012 Mac Pro Restoration
Problem:
I inherited an old 2012 Mac Pro that was outdated and too slow for its original use. Instead of retiring it, I wanted to restore and modernize the machine—maxing out its hardware and turning it into a triple-boot workstation for tinkering with different operating systems, programming, and school work.
Method:
Physical Modifications:
Cleaned and dusted the exterior and interior chassis.
Discovered it was already configured with dual CPUs, 8 RAM slots, and a GPU.
Reapplied thermal paste to refresh cooling.
Upgraded the RAM by installing 8 × 16 GB DDR3 1333 MHz sticks, maxing out memory capacity at 128 GB.
Software Modifications:
Installed two new 256 GB SSDs alongside the existing two 512 GB Apple SSDs.
Wiped all four drives for a clean setup.
Used OpenCore Legacy Patcher to bypass Apple’s software block and install macOS Sequoia on one Apple SSD (primary boot).
Configured second Apple SSD as a backup.
Used Boot Camp Assistant from Sequoia to install Windows 11 directly (no USB needed).
Installed Linux onto one of the new SSDs using balenaEtcher.
End result: macOS + Windows + Linux all locally available, each with its own dedicated SSD.
Result:
Successfully restored a 2012 Mac Pro into a triple-boot workstation.
Hardware maxed out (128 GB RAM) and given new life with SSD storage.
Runs macOS Sequoia, Windows 11, and Linux seamlessly.
Machine is now reliable for programming, experimenting with multiple operating systems, and daily school work.




