The tech world is full of little gems, and one of them is the wtf command from OpenBSD. For the uninitiated, wtf stands for “what the f***” and is a lighthearted utility designed to look up the meaning of acronyms. It has its roots in UNIX culture, where brevity and humor often intersect with practical tooling. The command is simple: you type wtf followed by an acronym, and it tells you what that acronym means. For example:

$ wtf is TCP  
TCP: Transmission Control Protocol

Despite its humorously irreverent name, wtf has been a practical tool for developers, sysadmins, and curious learners for decades. However, as the world moved beyond static acronym dictionaries, I began thinking: what if I reimagined wtf for the modern age, leveraging AI to answer not just acronyms but anything you might wonder about?
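For context, the classic wtf is essentially a lookup in a flat acronym database (on OpenBSD, /usr/share/misc/acronyms). Here is a minimal Python sketch of that behavior, assuming a tab-separated file of ACRONYM<TAB>expansion lines; the function name and sample data are illustrative, not the actual OpenBSD implementation:

```python
# Minimal sketch of the classic wtf: a case-insensitive scan of a
# tab-separated acronym database (like OpenBSD's /usr/share/misc/acronyms).

def classic_wtf(term: str, db_lines: list[str]) -> str:
    term = term.upper()
    for line in db_lines:
        acronym, _, expansion = line.partition("\t")
        if acronym == term:
            return f"{term}: {expansion}"
    return f"{term}: I don't know what that means."

sample_db = [
    "TCP\ttransmission control protocol",
    "IMHO\tin my humble opinion",
]
print(classic_wtf("tcp", sample_db))
```

The whole tool is just this lookup plus a dictionary file, which is what made it so fast and so easy to keep in muscle memory.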

The Idea: A Modern wtf

The original wtf was lightweight and to the point, and I wanted to preserve that spirit. But in today’s landscape, we often have questions that go beyond a single acronym. For example:

wtf is quantum computing?

wtf is a neural network?

wtf is Kubernetes?

While Google can provide answers, navigating search results or skimming through articles can be time-consuming. I wanted to create a command-line tool that’s as simple as the original wtf, but with the power to give you concise, well-formatted answers to almost any query, complete with references.

Enter OpenAI

People who know me know that I am not particularly impressed by the current state of AI, but I still decided to use OpenAI’s GPT model as the “brain” behind my modern wtf. GPT’s ability to synthesize information from vast datasets and provide concise, human-like answers made it a workable fit.

Building the New wtf

I started with a simple Python script that can be run from the command line. The basic idea is straightforward: you type a query in the format wtf is <query>, and the script sends the query to OpenAI’s API. The API responds with a well-crafted answer, and the script formats it neatly, including a list of relevant references.

Here’s how it works:

  1. You ask a question:
     wtf is quantum computing
  2. The script queries OpenAI’s GPT model:
     The script sends the query in a prompt to OpenAI, asking for a concise explanation and a list of references.
  3. The result is formatted (simply):
Searching for: quantum computing
==================================================
Answer to: quantum computing

Quantum computing is a type of computing that uses quantum-mechanical phenomena, such as superposition and entanglement, to process information. Unlike classical computers, which use bits as the smallest unit of data (0 or 1), quantum computers use quantum bits or qubits. Qubits can exist in multiple states simultaneously, allowing quantum computers to perform complex calculations much faster for certain problems.

Key applications include cryptography, optimization problems, drug discovery, and simulations of quantum systems.

References:
  1.  "Quantum Computing: An Overview" - IBM Quantum (https://www.ibm.com/quantum-computing/)
  2.  "What is Quantum Computing?" - Quantum Inspire (https://www.quantum-inspire.com/)
  3.  "Understanding Quantum Computing" - MIT Technology Review (https://www.technologyreview.com/)
==================================================

The Code

Here’s the code for my modernized wtf. Just add the code below to a file called wtf and place it somewhere in your PATH. Make sure you have the following in place:

Prerequisites

  1. Install OpenAI Python Library: pip install openai
  2. Get an API Key: Obtain your OpenAI API key from OpenAI’s website.
#!/usr/bin/env python3
# -----------------------------------------------------------
# Modernized wtf command using OpenAI's ChatGPT
#    By Kim Schulz <kim@Schulz.dk>
#    v1.1
# -----------------------------------------------------------

import sys
import time
import threading
import itertools
import openai

VERSION = "1.1"
# Configuration
OPENAI_API_KEY = "your_openai_api_key_here"  # Replace with your API key
MODEL = "gpt-4o"
# MODEL = "gpt-4"
openai.api_key = OPENAI_API_KEY

class Spinner:
    def __init__(self, message="Searching..."):
        self.spinner = itertools.cycle(["|", "/", "-", "\\"])
        self.message = message
        self.stop_running = False
        self.thread = threading.Thread(target=self.run)

    def run(self):
        while not self.stop_running:
            sys.stdout.write(f"\r{self.message} {next(self.spinner)} ")
            sys.stdout.flush()
            time.sleep(0.1)

    def start(self):
        self.thread.start()

    def stop(self):
        self.stop_running = True
        self.thread.join()
        sys.stdout.write("\r" + " " * (len(self.message) + 3) + "\r")  # Clear line
        sys.stdout.flush()


def query_chatgpt(prompt):
    """
    Query OpenAI's ChatGPT API and return the result.
    """
    spinner = Spinner("Searching...")
    spinner.start()
    try:
        # Make the API call
        response = openai.chat.completions.create(
            model=MODEL,
            messages=[
                {
                    "role": "system",
                    "content": "You are a helpful assistant that answers questions succinctly.",
                },
                {"role": "user", "content": prompt},
            ],
            max_tokens=500,
            temperature=0.7,
        )
        # Extract and return the content of the assistant's reply
        return response.choices[0].message.content.strip()
    except openai.APIConnectionError as e:
        print("The server could not be reached")
        print(e.__cause__)  # the underlying exception, likely raised within httpx
        sys.exit(1)  # APIConnectionError has no status_code
    except openai.RateLimitError as e:
        print("You have asked too many wtf's too fast. Back off a bit!")
        sys.exit(e.status_code)
    except openai.APIStatusError as e:
        print("The network returned an error:")
        print(e.status_code)
        print(e.response)
        sys.exit(e.status_code)
    finally:
        spinner.stop()


def main():
    if len(sys.argv) < 3 or sys.argv[1].lower() != "is":
        print("=" * 50)
        print("Modern wtf command v" + VERSION)
        print("   by Kim Schulz <kim@Schulz.dk>")
        print("=" * 50)
        print("Usage: wtf is <query>\n\n")
        sys.exit(1)

    # Combine query arguments
    query = " ".join(sys.argv[2:])
    prompt = (
        f"What is '{query}'? Please provide a clear, concise explanation followed by a brief list of reliable references "
        "to support your answer. Format references at the end in a neat list."
    )
    print(f"Searching for: {query}\n")

    # Perform the query
    result = query_chatgpt(prompt)

    # Format the output
    print("\n" + "=" * 50)
    print(f"Answer to: {query}")
    print("=" * 50)
    print(result)
    print("=" * 50)


if __name__ == "__main__":
    main()

Notes

  • Replace "your_openai_api_key_here" with your actual OpenAI API key.
  • The prompt asks GPT to append a formatted reference list to each answer, leveraging its ability to generate concise lists.
  • Adjust the max_tokens and temperature for desired response length and creativity.
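Rather than hardcoding the key in the script, you may prefer to read it from the environment. A small sketch, assuming the key is exported as OPENAI_API_KEY in your shell (the helper name load_api_key is my own, not part of the openai library):

```python
import os


def load_api_key() -> str:
    """Return the OpenAI API key from the environment, or fail with a clear message."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise SystemExit("wtf: set the OPENAI_API_KEY environment variable first")
    return key
```

In the script above, you would then replace the hardcoded assignment with openai.api_key = load_api_key(), which keeps the key out of your dotfiles and version control.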

How to Run

  1. Make the Script Executable: chmod +x wtf
  2. Run the Script: wtf is quantum computing

But why??

Recreating wtf with AI is more than a technical exercise—it’s a reminder of how far we’ve come in combining simplicity with power. The original wtf was a product of its time, offering quick help for acronym overload. Today’s version embraces the complexities of modern knowledge while keeping things as straightforward as possible.

For me, this project wasn’t just about building a tool; it was about reimagining how we access knowledge in an era where AI can serve as an on-demand, interactive reference guide. And in the spirit of OpenBSD’s wtf, it’s a reminder that even the smallest tools can evolve to meet the needs of their time.

Give it a try. What’s your “wtf”?