Brass Tacks: What You Need to Get Started with Auto-GPT

Updated: May 3

Setting Up and Configuring Auto-GPT | The Practical Catalyst | Michelle McGough

This post is the second in the series about trying to get Auto-GPT to generate BigFix fixlet content for new vulnerabilities.

This post is about how to get the resources you need to set yourself up with Auto-GPT. If you're not interested in doing that, life is short: skip this article!


Here's a list of resources I have found helpful:

How I Got Started

A former colleague and longtime friend, Dale Hattaja, heard me raving about how learning (and eventually teaching) AI prompting sparked a passion I hadn't felt for technology since BigFix. Since Dale and I share a BigFix connection, he mentioned Auto-GPT and suggested I check it out. I saw a demo on YouTube - it's embedded in the first linked resource above - and that inspired me to get started. I probably watched that demo five times at 50% speed to grasp the concept of how Auto-GPT works. From there I searched for a good set of instructions (the first link in the resources above) and got going.

Running on macOS Ventura

I'm using a 2018 MacBook Air that's been upgraded to Ventura. As for the requirements, Python 3 was already installed with the OS. I added the GitHub Desktop app and created a new OpenAI API key just for Auto-GPT so I could track the spend.
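Before installing anything, it's worth a quick Terminal check that the prerequisites are actually in place. A minimal sketch (assuming the stock Ventura command-line tools):

```shell
# Confirm Python 3 is on the PATH (it ships with recent macOS versions)
python3 --version

# Confirm pip is available for installing Auto-GPT's requirements
pip3 --version
```

If either command isn't found, installing Python 3 from python.org (or via Homebrew) brings both along.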

I started by cloning the Auto-GPT repo, but issues with it, followed by a tour through the project's GitHub issues, made me realize quickly that I'd do better running the stable version. So I...

  • went to GitHub and downloaded the latest stable release of Auto-GPT

  • extracted it

  • opened Terminal from the path where Auto-GPT was extracted

  • installed the requirements

pip3 install -r requirements.txt
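Once the requirements are installed, the last configuration step before a first run is supplying your OpenAI API key. In the stable releases I've seen, the extracted folder ships a `.env.template` file that Auto-GPT reads once it's copied to `.env` - a sketch, with a placeholder key value:

```shell
# From the folder where Auto-GPT was extracted:
# copy the shipped template to the filename Auto-GPT actually reads
cp .env.template .env

# Then open .env in an editor and set your key (placeholder shown):
# OPENAI_API_KEY=sk-your-key-here
```

Using a dedicated key here, as mentioned above, makes it easy to track Auto-GPT's spend separately on the OpenAI usage dashboard.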