The QPath Blog


ChatGPT-4: first approach to its use, scope and limitations in quantum algorithms

Authors

Ezequiel Murina
aQuantum Algorithms Team Leader

José Ignacio García
aQuantum Algorithms Specialist

Martín Hurtado Heredia
aQuantum Algorithms Specialist

ChatGPT was launched in November 2022. The product offers a conversational interface backed by an artificial intelligence that aims to enhance the user experience in terms of query interpretation, response accuracy and the quality of the information provided. These features invite discussion about its usefulness as a working tool for technical tasks related to the development of programming code, or even for analytical tasks related to algorithms. This post provides a brief description of what ChatGPT is and a first approach to the possibilities of its use, its scope and its limitations in the area of quantum algorithms, based on tests carried out with version 4 of the software (ChatGPT-4).

ChatGPT is developed by OpenAI. The company was created in 2015 with the ambition of offering software products in the area of artificial intelligence (AI) that bring great benefits to humanity, according to the self-definition published on its corporate website[1]. The products that OpenAI develops are:

·       GPT: the Deep Learning based language model on which its other products are built.

·       ChatGPT: conversational interface that will be described later in this article.

·       DALL-E: digital image generator from a text description entered by the user as input.

·       OpenAI Five: software trained as a player of an online strategy video game called Dota 2.

·       OpenAI Codex: an AI specialized in programming code development and debugging.

 

What is ChatGPT?

The term ChatGPT is an acronym for Chat Generative Pre-Trained Transformer. It refers to a conversational interface or chatbot, i.e., software that provides automatic answers to questions posed by a user in the context of a dialog or chat. Figure 1 shows the product dashboard which, as can be seen, has a minimalist design. In this case the image corresponds to the version for mobile devices. On the right is the window where the chat takes place: the user enters a question, the algorithm interprets it and provides an answer. On the left is a window with buttons to manage chat history, log-outs, subscriptions and other queries.

Figure 1: ChatGPT dashboard 

From a technical perspective we can define ChatGPT as a neural network with a Deep Learning architecture that is subjected to training. The training consists of three stages:

1.     Supervised learning, in which a set of human-written question/answer pairs is fed to the model.

2.     A ranking stage, in which the model generates answers to a set of questions and those answers are catalogued by human reviewers according to their quality.

3.     Reinforcement learning, in which the rankings from the previous step are used to build a reward model against which the fit of the network is further refined.

The data provided for ChatGPT training are massive. They serve to tune the neural network parameters, i.e., the statistical weights and biases from which the values of the neurons, organized in different layers, are computed. A general schematic of the atomic part of the network is shown in Figure 2. Circles indicate neurons arranged in layers. The straight lines indicate the neurons involved in computing the value of a neuron of the next layer. p^n_ij and b^n_i are the weights and biases of the n-th layer, respectively.


Figure 2: Neural network similar to the kernel of the Deep Learning network of the ChatGPT algorithm.
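To make the per-layer computation described above concrete, the following minimal sketch (our own illustration, with arbitrary layer sizes and a ReLU activation chosen only as an example) computes the values of one layer of neurons from the previous one as a weighted sum plus a bias:

import numpy as np

def layer_forward(a_n, weights, bias):
    # a_{n+1, i} = activation( sum_j p^n_ij * a_n_j + b^n_i )
    z = weights @ a_n + bias
    return np.maximum(z, 0.0)        # ReLU chosen as an illustrative activation

rng = np.random.default_rng(0)
a0 = rng.normal(size=4)              # values of the 4 neurons of layer 0
p0 = rng.normal(size=(3, 4))         # p^0_ij: weights connecting layer 0 to layer 1
b0 = rng.normal(size=3)              # b^0_i: biases of layer 1
print(layer_forward(a0, p0, b0))     # values of the 3 neurons of layer 1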

The number of parameters that make up the network is of the order of 100 billion[2], which makes the algorithm one of the largest language models in terms of parameter count. The statistical weights are adjusted based on keywords or tokens extracted from the documents that make up the training datasets. The size of these datasets, measured in number of tokens, as well as the source of the tokens, are shown in Figure 3. The most massive source is Common Crawl, an open repository, hosted on AWS, that stores texts crawled from web pages. Common Crawl and WebText are internet text repositories commonly used to train language models. Books1 and Books2 are large book corpora available online (following Table 2.2 of reference [2]).

Figure 3: Size of datasets in number of tokens and proportion of participation in the training of the ChatGPT Deep Learning network.
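As a rough illustration of what counting in tokens means, the following short example of our own uses OpenAI's open-source tiktoken package with the GPT-2 byte-pair encoding (the exact encoding varies between GPT models):

import tiktoken

enc = tiktoken.get_encoding("gpt2")   # GPT-2 byte-pair encoding
text = "Quantum annealing minimises a cost function encoded as a QUBO matrix."
tokens = enc.encode(text)
print(len(tokens), tokens[:8])        # token count and the first few token ids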

 

ChatGPT-4 as a tool for the development of quantum algorithms

The launch of ChatGPT-4 on March 14, 2023 has generated great repercussion and debate in public opinion regarding its use in areas as diverse as education[3], health[4], finance[5], art[6] and e-commerce[7]. Likewise, as usually happens with new technologies, it has already been preventively banned by some companies and institutions because of the legal vacuum around copyright and the quality of the information offered[8][9], and it has even drawn environmental criticism regarding its energy consumption and the carbon footprint it would produce[10].

As far as we have been able to ascertain, there is still no comprehensive discussion of the use of ChatGPT in quantum computing. With the aim of contributing to one, we share below our thoughts on the results we have obtained by employing ChatGPT-4 in the area of quantum algorithmics. The work was carried out using the Free Research Preview version, which can be accessed through the OpenAI website https://chat.openai.com/chat.

As indicated above, the ChatGPT algorithm is trained with massive data extracted from web pages and online repositories. Therefore, the quality of its answers in a specific area of knowledge is determined by the richness and volume of the data sources covering that area. Since this is decisive for the quality of the answers to the questions addressed in the chat, OpenAI warns with complete clarity on the chat home page about the current limitations of ChatGPT, with special emphasis on its limited knowledge of events after 2021, as shown in the message in Figure 4.

 

Figure 4: Initial user prompts displayed when logging into the ChatGPT website.

Although quantum computing is still relatively new in the media (which is not the case in the scientific literature), it is a hot topic about which much recent material can be found on the Internet, and we have found that ChatGPT-4 is able to provide information about this area and, in particular, about quantum algorithms. In the following, we show to what extent that merely informative capability extends to generating programming code that implements quantum algorithms. The tests were performed with two of the most widespread quantum technological approaches: quantum annealing (adiabatic programming) and quantum circuits (quantum gates).

 

ChatGPT-4 for annealing-based algorithm implementation

The first thing one notices when consulting ChatGPT-4 about annealing algorithms is that, by default, the code-level implementation is done by importing the Ocean SDK from the manufacturer D-Wave. However, if explicitly requested, ChatGPT-4 can provide an implementation using technology from other manufacturers, such as Fujitsu, for which it imports the pyDA library of its Digital Annealer platform, as shown in Figure 5.


Figure 5: Query for an annealing code with Fujitsu’s software technology.
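For readers unfamiliar with the Ocean workflow that ChatGPT-4 defaults to, the following minimal sketch (our own, not ChatGPT output) builds a two-variable QUBO and solves it locally with D-Wave's simulated-annealing sampler; it assumes the dimod and dwave-neal packages are installed and needs no QPU account:

import dimod
import neal

# QUBO favouring x0 != x1: minimise  -x0 - x1 + 2*x0*x1
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

sampler = neal.SimulatedAnnealingSampler()
result = sampler.sample(bqm, num_reads=100)
print(result.first.sample, result.first.energy)   # e.g. {0: 1, 1: 0} with energy -1.0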

To illustrate the hits and misses of ChatGPT-4 in implementing an annealing algorithm, we address the Knapsack Problem, a classic optimization problem.

We made the query "I propose another problem, the knapsack problem, which from previous conversations I have already seen that you know it. Can you come up with Python code that solves this problem using annealing?", to which ChatGPT-4 responds with the code shown in Figure 6.

In general, we observe that the codes returned as answers have a correct global structure in terms of syntax, adherence to Python style, explanatory comments and modularization of the different stages of the implemented algorithm. However, we also detect faults at the logical-mathematical level, which we describe in detail below.

As for the successes of the code generated when implementing the Knapsack Problem, indicated in Figure 6, the following are worth mentioning:

·       Creation of the data input for an annealing solver: it generates the general structure of the QUBO (Quadratic Unconstrained Binary Optimization) matrix using a Python dictionary as the data type, consistent with best practices (a hand-written reference formulation in this style is sketched after Figure 6);

·       Import of all the modules, and correct creation and management of the objects, necessary to execute the code on the indicated manufacturer's solvers.

On the other hand, some failures are also detected and are shown in detail in Figure 6:

·       Sometimes there is a misinterpretation of the optimization problem being queried;

·       Errors in the mathematics of the implemented algorithm, both in the constraints and in the coding of the mathematical variables of the optimization problem;

·       Inclusion of incorrect terms in the QUBO matrix;

·        Sometimes there are errors in the index references of some arrays, mainly in the code block that shows the user the solution to the problem.

Figure 6: Code generated by ChatGPT to solve the Knapsack Problem using the annealing method. 
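For reference, and as announced above, the following is a hand-written sketch of a correct dictionary-based QUBO formulation of a tiny Knapsack instance (our own illustration, not ChatGPT output and not the QuantumPath® formulation). The capacity constraint is turned into an equality with binary slack variables and enforced with a quadratic penalty; dimod's exact solver is used because the instance is small enough to enumerate:

import dimod

values = [10, 7, 5]          # item values v_i (to be maximised)
weights = [3, 2, 1]          # item weights w_i
capacity = 4                 # knapsack capacity W
penalty = 2 * max(values)    # penalty weight, large enough for this toy instance

n = len(values)
slack_bits = capacity.bit_length()                       # slack s = sum_k 2^k y_k
coeff = weights + [2 ** k for k in range(slack_bits)]    # coefficients of sum_j c_j z_j = W

Q = {}
for i in range(n):                                       # objective: minimise -sum_i v_i x_i
    Q[(i, i)] = Q.get((i, i), 0.0) - values[i]
for i, ci in enumerate(coeff):                           # penalty * (sum_j c_j z_j - W)^2 expanded
    Q[(i, i)] = Q.get((i, i), 0.0) + penalty * ci * (ci - 2 * capacity)
    for j in range(i + 1, len(coeff)):
        Q[(i, j)] = Q.get((i, j), 0.0) + 2 * penalty * ci * coeff[j]

bqm = dimod.BinaryQuadraticModel.from_qubo(Q, offset=penalty * capacity ** 2)
best = dimod.ExactSolver().sample(bqm).first             # brute force, fine for a toy instance
chosen = [i for i in range(n) if best.sample[i] == 1]
print("items:", chosen, "value:", sum(values[i] for i in chosen))   # items: [0, 2] value: 15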

The verification of the Knapsack Problem implementation was performed with QuantumPath®, the platform for the professional development of quantum algorithms and quantum software solutions that we use in our work, with which, as can be seen, the solution can be implemented in a direct, simple and scalable way. Figure 7 shows how the input parameters are entered, while Figure 8 shows the Hamiltonian to be minimized.

Figure 7: Q Assets Compositor® for Annealing. Parameter definition instance for the Knapsack Problem.

 


Figure 8: Q Assets Compositor® for Annealing. Instance of definition of the Hamiltonian for the Knapsack Problem.

 

ChatGPT-4 tests for the implementation of gate-based algorithms or quantum circuits

In terms of ChatGPT-4 failures and successes when exposed to quantum gate-based algorithm implementation, the picture is similar to what happens with annealing technology. However, ChatGPT-4 performs better, probably because the material available online is more abundant and quantum programming in the literature is mostly addressed from the quantum gate approach.

In this case the best results are obtained when dealing with already standardized primitives (sets of gates that perform a specific computation), such as the Quantum Fourier Transform. The results get worse as the complexity of the algorithm increases or as we deal with more specific algorithms for which there is limited information in the literature.
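As an example of such a standardized primitive, the following minimal sketch (our own, assuming a recent Qiskit version in which qiskit.circuit.library.QFT is available) builds a three-qubit Quantum Fourier Transform from the library component rather than from generated code:

from qiskit import QuantumCircuit
from qiskit.circuit.library import QFT

qc = QuantumCircuit(3)
qc.x(0)                                            # prepare a non-trivial input state
qc.compose(QFT(3), qubits=range(3), inplace=True)  # standard QFT primitive
print(qc.decompose().draw())                       # show the underlying gate sequence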

The first thing we detect from a technical point of view when querying ChatGPT-4 about gate-based algorithms is that the code-level implementation is done by default by importing IBM's Qiskit SDK. However, if explicitly requested, as shown in Figure 9, it can provide an implementation using third-party technology such as Xanadu's PennyLane or Rigetti's Forest.

Figure 9: Query for the implementation of a quantum gate algorithm with a different SDK than Qiskit 
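As an illustration of the kind of alternative-SDK code that can be requested, this minimal PennyLane sketch (our own, not the output shown in Figure 9) prepares a two-qubit Bell state on PennyLane's default simulator:

import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def bell_state():
    qml.Hadamard(wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.probs(wires=[0, 1])

print(bell_state())   # approximately [0.5, 0.0, 0.0, 0.5]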

Another experiment was to evaluate ChatGPT-4 in the implementation of the Grover and Bernstein-Vazirani algorithms. It is plausible to expect a better performance with the former, since there is considerably more information available online about Grover’s algorithm than about the latter.

When querying "Can you implement the Grover algorithm with three qubits and a marked state?", the code returned as a response is the one shown in Figure 10. The implementation that ChatGPT-4 produces is generally acceptable: the code is well structured and the encoding of the quantum operators is correct.

However, the code returned by ChatGPT-4 fails to estimate the number of iterations needed to achieve good solution accuracy. The impact of this failure may be greater or smaller depending on the specific case being treated. It performs only one iteration, when it should perform two according to the analytical formula for estimating the number of iterations[11]. When the formula is explicitly requested, ChatGPT-4 states it correctly, but the code it then outputs applies the right number of iterations (two in this case) only to the section of the circuit that encodes the diffuser operator, not to the part that encodes the marker, or oracle, of the algorithm. If done correctly, both parts should be repeated the same number of times. Figure 10 shows the analyzed code.

 

 Figure 10: Failures (indications in red) and hits (indications in green) of the code generated by ChatGPT to implement Grover’s algorithm with a marked state.
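As a quick sanity check on the iteration count, a commonly used form of the estimate in [11] is k = floor((pi/4) * sqrt(N/M)) for N basis states and M marked states; the following short snippet of our own confirms the value of two iterations for three qubits and one marked state:

import math

def grover_iterations(n_qubits: int, n_marked: int = 1) -> int:
    # k = floor((pi/4) * sqrt(N / M)) with N = 2^n basis states and M marked states
    N = 2 ** n_qubits
    return math.floor((math.pi / 4) * math.sqrt(N / n_marked))

print(grover_iterations(3))   # -> 2 iterations for three qubits and one marked state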

Figure 11 shows the verification with QuantumPath®, in which Grover's algorithm can be implemented in a direct, simple and scalable manner.

Figure 11: Q Assets Compositor® for Quantum Circuit. Quantum circuit definition instance implementing Grover's algorithm.

The next case to be shown is the implementation of the Bernstein-Vazirani algorithm, which recovers a binary string that is part of the definition of an otherwise unknown function. Figure 12 shows the code generated by ChatGPT-4, which implements the algorithm for the string '011'. The mathematical bugs in the code lead to an erroneous result in which the sought string does not appear; instead, the strings '001' and '000' are returned.

Figure 12: Failures (red indications) and successes (green indications) of the code generated by ChatGPT to implement the Bernstein-Vazirani algorithm for the string '011'.
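For comparison, a minimal hand-written Qiskit implementation of the Bernstein-Vazirani circuit for the hidden string '011' (our own sketch, assuming Qiskit and the Aer simulator are installed; not the ChatGPT output of Figure 12) looks as follows:

from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

secret = "011"
n = len(secret)

qc = QuantumCircuit(n + 1, n)
qc.x(n)                                        # ancilla in |1>
qc.h(range(n + 1))                             # Hadamards on all qubits
for i, bit in enumerate(reversed(secret)):     # qubit 0 is the least significant bit in Qiskit
    if bit == "1":
        qc.cx(i, n)                            # oracle: phase kickback on the bits set to 1
qc.h(range(n))
qc.measure(range(n), range(n))

counts = AerSimulator().run(qc, shots=1024).result().get_counts()
print(counts)                                  # expected: {'011': 1024}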

Figure 13 shows the verification of the implementation of the Bernstein-Vazirani algorithm with QuantumPath®.  

Figure 13: Q Assets Compositor® for Quantum Circuit. Quantum circuit definition instance implementing the Bernstein-Vazirani algorithm 

Conclusions

After the experimentation shared above, the first question to be answered is: does ChatGPT-4 have the ability to implement a quality quantum algorithm at code level? As shown above, with the current version, it does not.

We think this is so because, on the one hand, it is difficult in algorithms in general, and even more so in quantum algorithms, to decouple the conceptual machinery behind a computation from the programming code that materializes its implementation. To cite just one example, as already mentioned, ChatGPT-4 fails to estimate the number of iterations for Grover's algorithm, even though the diffuser operator and the oracle are well encoded as sequences of gates. On the other hand, the implementation offered by ChatGPT-4 is sensitive to the information with which it was trained and, therefore, to how well documented online the algorithm in question is.

Another valid question for reflection on the experiments performed is: is ChatGPT-4 useful as a technical support tool for the implementation of quantum algorithms? According to the exercises performed with the current version, we believe that it is, but only for very simple cases: it could be valid for well-documented algorithms, with ample literature available online and of low conceptual complexity (algorithms with few stages, a small number of mathematical variables, and no need for estimates of the number of iterations or the magnitude of hyperparameters).

In summary, based on the experiments we have performed, some of which we have shared in this post, we consider that ChatGPT-4, at this time, is not a suitable tool for the implementation of algorithms that solve industry or real-world problems, where dimensionality scales up and documentation is probably scarce because solutions are built ad hoc for specific customers. Notwithstanding the limitations of the product in its current version, many of which we consider transitory given the early stage of the model with respect to quantum algorithms, the potential we detected for automating some tasks or stages of quantum algorithm development is really promising.

Bearing in mind how complex it is to develop customized quantum algorithms (and that these tasks usually involve a high level of confidentiality, and even industrial secrecy), we envision a long and complex process of training this type of neural network before services of this kind are able to offer specific, high-quality quantum algorithms for each activity or business. In fact, in our opinion, the qualitative leap that will allow these neural networks to successfully address the complex and particular processes of real life will be linked to their treatment with quantum artificial intelligence. If we add to this the fact that the practical application of these algorithms will have to be integrated into software systems that make quantum software truly industrially ready, everything leads us to believe that quantum human intelligence has an excellent projection in this technological race.

Therefore, we will keep an eye on its evolution, continue experimenting and, consequently, sharing the positive advances of ChatGPT for the development of quantum algorithms.

References

[1] https://openai.com/about (last checked on March 2023)

[2] T. Brown et al., Language Models are Few-Shot Learners, arXiv:2005.14165 (2020)

[3] https://blog.linclearning.com/es/chat-gpt-y-el-futuro-de-la-educacion (last checked on March 2023)

[4] https://www.infobae.com/tecno/2023/02/12/esta-herramienta-usa-chat-gpt-para-dar-ayuda-psicologica/ (last checked on March 2023)

[5] https://citywire.com/es/news/chatgpt-el-sabelotodo-tambi%C3%A9n-de-las-finanzas-les-tiende-la-mano-a-los-asesores-financieros/a2406841 (last checked on March 2023)

[6] https://medium.com/@tonylab_net/revolutionizing-ai-art-with-chat-gpt-4s-advanced-conversational-capabilities-c11bf10caeef (last checked on March 2023)

[7] https://ecommerce-news.es/el-chatgpt-y-su-revolucion-en-los-negocios/ (last checked on March 2023)

[8] H. Holden Thorp (2023). ChatGPT is fun, but not an author. Science, Journal Article. VOL 379 ISSUE 6630, PG – 313-313. https://doi.org/10.1126/science.adg7879

[9] https://meta.stackoverflow.com/questions/421831/temporary-policy-chatgpt-is-banned (last checked on March 2023)

[10] https://towardsdatascience.com/the-carbon-footprint-of-chatgpt-66932314627d (last checked on March 2023)

[11] P. Kaye, R. Laflamme, M. Mosca, An Introduction to Quantum Computing, Oxford University Press (2007), p. 163