My experience deploying my first LLM locally

Adithya Thatipalli
4 min read · Feb 25, 2024


It's been a while since I wrote something, but it's good to be back.

A few days back, I was working on something tedious that takes a lot of time because it's new to me, and I don't have any systems in place to automate it. Even as I was starting it, I wanted to automate the process.

I could see only two ways.

One way is to do the work and put existing systems in play.
The other is to subscribe to AI tools that can help me.

But the desi inside me kept asking whether I should pay for it when I wasn't seeing anything in return.

Can’t we get it for free?

That's when I thought about open-source models. Every day I see news of so many open-source models coming to the market, so why can't I just deploy one and use it for my own purposes?

But another question came to my mind.

Actually, a couple more:

Will it work on my machine?
How to deploy it?
What is the process?
Which one should I start with?

While thinking this over, I started searching Google and YouTube for some light. Every day I see the same creators and the same websites, but this time a couple of them looked like guiding lights that could show me the way.

I started with Hugging Face to understand the list of available models, their sizes and types, and which of them could be useful to me. I had been watching videos of people who had already done this walkthrough, so it did not take long to pick a model.
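If you want to do that browsing from code instead of the website, a minimal sketch using the huggingface_hub library could look like this; the task filter and limit here are just illustrative assumptions, not what I actually used:

```python
# A minimal sketch of browsing the Hugging Face Hub programmatically.
# The task and limit are examples, not a recommendation.
from huggingface_hub import list_models

# List the five most-downloaded speech-to-text models on the Hub
for model in list_models(
    task="automatic-speech-recognition",
    sort="downloads",
    direction=-1,
    limit=5,
):
    print(model.id)
```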

The first question, i.e., which model to deploy, was solved.

Next one,

How to deploy?

I am not a big fan of following docs, so I was only half aware that docs were available and that I could use them.

But leaving all of that aside, I looked at the options for how to proceed next.

I saw that I could train it, deploy it, or use it in Transformers. Beyond a basic understanding, I had no idea how each of these works. Then I got to know that there are multiple options to deploy.
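To give an idea of what the "use it in Transformers" route looks like, here is a minimal sketch, assuming the transformers library is installed; the gpt2 checkpoint and the prompt are just examples:

```python
# A minimal sketch of running a model through the Transformers pipeline API.
# The checkpoint name is an example; any compatible model from the Hub works.
from transformers import pipeline

# Downloads the checkpoint on first run, then builds a text-generation pipeline
generator = pipeline("text-generation", model="gpt2")

result = generator("Deploying my first local model was", max_new_tokens=20)
print(result[0]["generated_text"])
```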

Again, multiple options.

Now we have two routes.

One is to deploy it in the cloud on something like AWS or Azure; the other is to deploy it on our local system and use it through APIs.

I have two issues here.

I don't have credits on cloud platforms to deploy and test there.
Even if I test there, I can't use it for long.

So, I had to figure out a way to deploy on my own machine. Now my searches were filtered further; I had to find videos specific to this approach.

I thought I was progressing, but I had circled back to square one, just moving from the ground floor to the first floor.

After watching a couple of videos, I understood that I needed Python and a couple of other prerequisites to set up the environment to deploy the model.

Coding is not my area of expertise, even with mainstream AI coding assistance available. I need some handholding while understanding and modifying code. I can put in the effort to reverse engineer and understand what it does, but writing code still feels like gibberish to me.

But I didn't have any other option.

I enabled split screen, opened a couple of repos and videos, and started taking them one by one. I installed Python, PyTorch, package managers, environments, dependencies, and everything else required.

(As explained in the videos.) I still need to read about each major dependency.
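As a rough sanity check of that kind of setup, something like the sketch below should work, assuming a fresh virtual environment with torch and transformers installed (the install commands are in the comments):

```python
# A minimal sanity check for the environment, assuming you have run
# something like:
#   python -m venv venv && source venv/bin/activate
#   pip install torch transformers
import torch
import transformers

print("PyTorch version:", torch.__version__)
print("Transformers version:", transformers.__version__)
# Most small first deployments run fine on CPU; CUDA just makes them faster
print("CUDA available:", torch.cuda.is_available())
```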

One of the key things I noticed is that even for local deployment there are different ways to set up the environment. I saw a couple of walkthroughs where they spoke about building a Docker image and running the model in a container.

For now, I decided to let that rest. I went step by step down the rabbit hole; even though it might look simple, it was all new to me. After a couple of failed attempts and some error fixing, I was able to deploy one small speech-to-text model.

This was different from what I originally wanted to deploy, but it was an easy way to start and gain confidence.
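To give a sense of how short the happy path can be, a minimal local speech-to-text run with the Transformers pipeline might look like this sketch; openai/whisper-tiny and audio.wav are assumptions for illustration, not necessarily what I ran:

```python
# A minimal sketch of local speech-to-text with the Transformers pipeline.
# openai/whisper-tiny is an example checkpoint; "audio.wav" is a placeholder.
from transformers import pipeline

# Build an automatic-speech-recognition pipeline with a small Whisper model
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

# Transcribe a local audio file and print the recognized text
result = asr("audio.wav")
print(result["text"])
```

Small checkpoints like the tiny Whisper variants are a common choice for a first local run because they download quickly and run on CPU.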

It's a long way to go, but it felt like a good start toward understanding this space in more depth.

Thanks for reading :)


Adithya Thatipalli

Security Engineer by Day, Cloud and Blockchain Learner during Night