In this article, we'll walk through the steps involved in building a chat application and an answering bot in Python using the ChatGPT API and Gradio.
Developing a chat application in Python gives you more control and flexibility than the ChatGPT website. You can customize and extend the chat application to suit your needs, and it also helps you integrate with your existing systems and other APIs.
What is Gradio?
Gradio is a Python library that makes it easy to create customizable user interfaces for predictive and generative models, allowing you to quickly build interactive applications without extensive front-end development experience.
See below some of the benefits of using Gradio.
- Quick Mock-ups: With Gradio, you can quickly iterate and experiment with different model configurations and user interfaces without writing extensive code.
- No Front-End Development Experience Required: You don't need to be an expert in front-end development to create interactive applications with Gradio. It hides the complexities of building user interfaces, allowing you to focus on the functionality of your models.
- Multi-Input and Multi-Output Support: Gradio supports models with multiple inputs and outputs, making it versatile for a wide range of AI applications.
- Live Feedback: Gradio provides real-time feedback, allowing users to see results and interact with the model easily.
- Sharing and Deployment: Gradio makes it simple to share and deploy your web apps.
How to get the ChatGPT API
To get started, the first and most important step is to sign up using this link: platform.openai.com. You can easily sign up using your existing Google or Microsoft account. Once you're signed up, you will need a secret API key to use the API. It will look something like the example below. Make sure to copy your API key and keep it somewhere safe for future reference.
sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
After completing the sign-up process, you'll receive a free $5 grant to test the ChatGPT API. This grant expires after 3 months. Once the grant is exhausted, you will be charged $0.0015 per 1,000 tokens used with GPT-3.5. Tokens are pieces of words; 1,000 tokens is roughly 750 words of English text. It is important to keep your API key confidential and not share it with others, as you will be responsible for any charges incurred by their usage.
For GPT-4 models with an 8,000-token context length (e.g., gpt-4), pricing is considerably higher than the GPT-3.5 model. Pricing for this model is as follows:
- $0.03 per 1,000 prompt tokens
- $0.06 per 1,000 sampled tokens
For GPT-4 models with the larger 32,000-token context length (e.g., gpt-4-32k), pricing is double that of the 8K context models:
- $0.06 per 1,000 prompt tokens
- $0.12 per 1,000 sampled tokens
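To see what these rates mean in practice, here is a small illustrative cost estimator. `estimate_cost` and the `PRICES` table are hypothetical helpers built from the per-1,000-token rates listed above, not part of the OpenAI library; the article quotes only a single blended rate for GPT-3.5, so that model uses the same rate for both sides.

```python
# Hypothetical helper: USD per 1,000 tokens as (prompt rate, sampled rate),
# taken from the rates quoted above.
PRICES = {
    "gpt-3.5-turbo": (0.0015, 0.0015),  # article quotes one blended rate
    "gpt-4":         (0.03,   0.06),
    "gpt-4-32k":     (0.06,   0.12),
}

def estimate_cost(model, prompt_tokens, sampled_tokens):
    """Estimate the USD cost of one request from its token counts."""
    prompt_rate, sampled_rate = PRICES[model]
    return (prompt_tokens / 1000) * prompt_rate + (sampled_tokens / 1000) * sampled_rate

# A gpt-4 call with 500 prompt tokens and 200 sampled tokens:
print(round(estimate_cost("gpt-4", 500, 200), 4))  # 0.027
```

For example, the same 500/200-token request against gpt-4-32k would cost twice as much ($0.054), which is why it pays to pick the smallest model and context length that fits your task.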
Demo: ChatGPT Clone
Please see the GIF image below, which shows how the ChatGPT clone looks and works.
Install the required packages
Make sure to install these 3 Python packages: gradio, openai, and kivy. The kivy package lets you copy ChatGPT output to the clipboard at the click of a button.

```shell
pip install gradio openai kivy
```
Python code: ChatGPT Clone
```python
import gradio as gr
import openai
from kivy.core.clipboard import Clipboard

prompt = "Send a message"

def chat(prompt, apiKey, model):
    # Send a single-turn request to the ChatGPT API and return the reply,
    # or a readable error message if the call fails.
    error_message = ""
    try:
        response = openai.ChatCompletion.create(
            model=model,
            api_key=apiKey,
            messages=[{'role': 'user', 'content': prompt}],
            temperature=0.7
        )
    except Exception as e:
        error_message = str(e)

    if error_message:
        return "An error occurred: {}".format(error_message)
    else:
        return response['choices'][0]['message']['content']

def chatGPT(userMsg, history, modelType, apiKey):
    # Flatten the (user, assistant) history tuples into one prompt string
    # so the model sees the whole conversation, then append the new turn.
    history = history or []
    comb = list(sum(history, ()))
    comb.append(userMsg)
    prompt = " ".join(comb)
    output = chat(prompt, apiKey, modelType)
    history.append((userMsg, output))
    return history, history

def lastReply(history):
    # Copy the most recent assistant reply to the system clipboard.
    if history is None:
        result = ""
    else:
        result = history[-1][1]
    Clipboard.copy(result)
    return result

with gr.Blocks(theme=gr.themes.Monochrome(),
               css="pre {background: #f6f6f6} "
                   "#submit {background-color: #fcf5ef; color: #c88f58;} "
                   "#stop, #clear, #copy {max-width: 165px;} "
                   "#myrow {justify-content: center;}") as demo:
    gr.Markdown("""<center><h1>🚀 ChatGPT</h1></center>""")
    with gr.Row():
        with gr.Column(scale=0.5):
            modelType = gr.Dropdown(choices=["gpt-3.5-turbo", "gpt-4"],
                                    value="gpt-3.5-turbo",
                                    label="Model",
                                    info="Select your model type")
        with gr.Column(scale=0.5, min_width=0):
            apiKey = gr.Textbox(label="API Key", info="Enter API Key",
                                lines=1, placeholder="sk-xxxxxxxxxxx")
    chatbot = gr.Chatbot().style(height=250)
    state = gr.State()
    with gr.Row():
        with gr.Column(scale=0.85):
            msg = gr.Textbox(show_label=False, placeholder=prompt).style(container=False)
        with gr.Column(scale=0.15, min_width=0):
            submit = gr.Button("Submit", elem_id="submit")
    with gr.Row(elem_id="myrow"):
        stop = gr.Button("🛑 Stop", elem_id="stop")
        clear = gr.Button("🗑️ Clear History", elem_id="clear")
        copy = gr.Button("📋 Copy last reply", elem_id="copy")

    clear.click(lambda: (None, None, None), None,
                outputs=[chatbot, state, msg], queue=False)
    submit_event = submit.click(chatGPT, inputs=[msg, state, modelType, apiKey],
                                outputs=[chatbot, state])
    submit2_event = msg.submit(chatGPT, inputs=[msg, state, modelType, apiKey],
                               outputs=[chatbot, state])
    stop.click(None, None, None, cancels=[submit_event, submit2_event])
    copy.click(lastReply, inputs=[state], outputs=None)

demo.queue().launch(inbrowser=True, debug=True)
```
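The clone carries conversation context by flattening its history of (user, assistant) tuples into a single prompt string. That step is easy to miss inside the UI code, so here it is in isolation (the sample `history` values are made up for illustration):

```python
# The clone stores each turn as a (user, assistant) tuple.
history = [("2+2", "4"), ("square of it", "16")]

# sum(history, ()) concatenates the tuples into one flat tuple,
# which list() turns into a flat list of alternating messages.
comb = list(sum(history, ()))   # ['2+2', '4', 'square of it', '16']

# Append the new user message and join everything into one prompt string.
comb.append("add 3 to it")
prompt = " ".join(comb)
print(prompt)  # 2+2 4 square of it 16 add 3 to it
```

Sending the joined string as a single user message is a quick way to provide context; the next section shows the cleaner approach of passing the history as a structured message list.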
Features of ChatGPT Clone
See the key features of the ChatGPT clone below.
- Copy last reply: Users can easily copy the previous response generated by ChatGPT, which is convenient for referencing or sharing.
- Clear History: Offers the option to clear the conversation history, enabling users to start fresh.
- Ability to stop a running request: The clone lets users halt processing if needed, which is useful when a request runs for a long time without returning anything.
- Easy switching between model types: The clone lets you switch between GPT-3.5 and GPT-4.
How to enable ChatGPT's conversation memory
By default, the ChatGPT API does not retain the memory of previous conversations. Each API request is treated as a separate chat session, so when ChatGPT responds to your current query, it does not recall any information from your previous questions.
This can be a drawback if you want to improve ChatGPT's responses by asking follow-up questions. Retaining context can also help you design better prompts. To make ChatGPT remember prior conversations, you need to provide the context every time you interact with it. Refer to the Python code below to see how it works.
```python
import openai
import os

# Replace with your own secret key; never publish a real key.
os.environ['OPENAI_API_KEY'] = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
openai.api_key = os.getenv("OPENAI_API_KEY")

chatHistory = []

def chat(prompt, modelName="gpt-3.5-turbo", temperature=0.7, top_p=1):
    params = {
        "model": modelName,
        "temperature": temperature,
        "top_p": top_p
    }
    # Append the new user message, then send the ENTIRE history with
    # every request; this is what gives ChatGPT its "memory".
    chatHistory.append({"role": "user", "content": prompt})
    response = openai.ChatCompletion.create(
        **params,
        messages=chatHistory
    )
    answer = response["choices"][0]["message"]["content"].strip()
    # Store the assistant's reply so it is part of the next request's context.
    chatHistory.append({"role": "assistant", "content": answer})
    return answer
```

With the history in place, follow-up questions resolve correctly:

```python
chat("2+2")          # 4
chat("square of it") # 16
chat("add 3 to it")  # 19
```
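One caveat: because the full history is resent on every call, token usage (and cost) grows with every turn. A simple mitigation is to trim the history before each request. `trim_history` below is an illustrative sketch, not part of the OpenAI library; production code would count tokens (e.g., with a tokenizer) rather than messages, but the idea is the same.

```python
def trim_history(chat_history, max_messages=6):
    """Keep only the most recent messages so each request stays small.

    A naive sketch: trimming by message count is crude, but it bounds
    the prompt size sent with every API call.
    """
    return chat_history[-max_messages:]

# Ten accumulated messages get cut down to the last six.
history = [{"role": "user", "content": str(i)} for i in range(10)]
trimmed = trim_history(history)
print(len(trimmed))           # 6
print(trimmed[0]["content"])  # 4
```

The trade-off is that ChatGPT forgets anything that falls outside the trimmed window, so pick a limit that matches how far back your follow-up questions typically reach.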
