Warning: Please comply with the OpenAI Content policy while using this extension.
To use this extension, you must have an OpenAI API account, obtain your API key, and add it on the extension's configuration screen.
You can find your API key in "View API keys" under your profile settings.
The interface of the extension follows the interface of the OpenAI Playground.
Send Prompt: sends the entered prompt to GPT-3.
Copy Answer to Clipboard: copies the last GPT-3 answer to the system clipboard.
Load an Example: loads an example of GPT-3 usage into your prompt. Examples are taken from the OpenAI website.
Check Examples on OpenAI Website: opens the OpenAI Examples page in your browser.
Change API Key: opens the Raycast extension preferences, where you can change the API key.
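To make the Send Prompt command concrete, here is a minimal sketch of the kind of request such an extension sends to the OpenAI Completions API. This is not the extension's actual source (which is a Raycast TypeScript extension); the function name and prompt are illustrative.

```python
import json

# Hypothetical sketch: assemble the HTTP headers and JSON body that a
# "Send Prompt" action would POST to the OpenAI Completions API endpoint.
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt, api_key, model="text-davinci-003"):
    """Return (headers, body) for a completion request."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # the key you added in preferences
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "prompt": prompt})
    return headers, body

headers, body = build_request("Say hello", "sk-...")
```

Sending `body` to `API_URL` with those headers (via any HTTP client) is all the command fundamentally does; everything else is UI around it.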
You can set different parameters for the AI model:
AI Model: the model you want to use.
text-davinci-003 is currently the most capable one.
Temperature: controls the randomness of the AI model's output. The lower it is, the less random (and "creative") the results will be.
Maximum Tokens: a limit on the number of tokens the AI model will generate in the response. You can see a live preview of how many tokens your prompt has underneath the prompt input field.
Top P: controls response diversity and is similar in effect to Temperature.
Frequency Penalty: controls how repetitive responses can get. Increasing this value lowers the chance of repetition.
Presence Penalty: controls how novel responses can get. Increasing this value raises the chance of novel answers.
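The settings above map one-to-one onto fields of an OpenAI Completions API request. A sketch of a full request payload, with illustrative values (not recommendations):

```python
# How the Playground-style settings map onto OpenAI Completions API fields.
# The prompt and values are illustrative only.
payload = {
    "model": "text-davinci-003",   # AI Model
    "prompt": "Write a haiku about autumn.",
    "temperature": 0.7,            # lower = less random output
    "max_tokens": 256,             # cap on tokens generated in the response
    "top_p": 1.0,                  # nucleus-sampling diversity control
    "frequency_penalty": 0.0,      # penalizes frequently repeated tokens
    "presence_penalty": 0.0,       # penalizes tokens already present (novelty)
}
```

Each key here is the actual API parameter name behind the corresponding UI setting.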
The OpenAI API charges based on the total number of tokens, i.e., the number of tokens you submit in the prompt plus the number of tokens you receive in the response. Current prices are listed on the OpenAI Pricing page.
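A back-of-the-envelope cost estimate based on that billing model. The per-1K-token price below is an assumption for illustration only; always check the OpenAI Pricing page for current rates.

```python
# Hypothetical per-1K-token price for illustration; real rates vary by model
# and over time -- check the OpenAI Pricing page.
PRICE_PER_1K_TOKENS = 0.02  # USD, assumed

def estimate_cost(prompt_tokens, completion_tokens, price_per_1k=PRICE_PER_1K_TOKENS):
    """Billing counts prompt and completion tokens together."""
    total = prompt_tokens + completion_tokens
    return total / 1000 * price_per_1k

# e.g. a 120-token prompt with a 380-token answer = 500 billed tokens
cost = estimate_cost(prompt_tokens=120, completion_tokens=380)
```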
Tokens represent the length of your prompt. For English text, 1 token is approximately 4 characters or 0.75 words. As a point of reference, the collected works of Shakespeare are about 900,000 words or 1.2M tokens.
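The rules of thumb above can be sanity-checked with simple arithmetic. These are rough heuristics for English text; the real encoder's counts can differ.

```python
# Rough token estimates from the stated rules of thumb:
# ~4 characters per token, ~0.75 English words per token.
def tokens_from_chars(n_chars):
    return n_chars / 4

def tokens_from_words(n_words):
    return n_words / 0.75

# Shakespeare's collected works: ~900,000 words -> ~1.2M tokens.
shakespeare_tokens = tokens_from_words(900_000)
```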
The extension dynamically calculates the number of tokens in your prompt using the open-source GPT-3 Encoder library. After an answer has been received, the prompt token count is updated with the token usage reported in the OpenAI API response, which is the exact count of tokens OpenAI charges you for.
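The accurate count comes from the "usage" object that the Completions API includes in every response. A sketch of reading it back (the response below is a made-up sample, not real API output):

```python
# Sample of the "usage" object shape returned by the Completions API;
# the values here are invented for illustration.
sample_response = {
    "choices": [{"text": "Hello!"}],
    "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7},
}

def billed_tokens(response):
    """Return the exact token counts OpenAI reports (and bills) for a call."""
    usage = response["usage"]
    return usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"]

prompt_t, completion_t, total_t = billed_tokens(sample_response)
```

The local encoder's estimate is only needed before the call; once the response arrives, these server-reported numbers are authoritative.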