Generating test code from OpenAPI and PlantUML with Azure OpenAI Service, using prompts optimized by the PromptGenerator method
In the previous article, I described an effort to generate test code from OpenAPI and PlantUML with Azure OpenAI Service.
One thing left undone there was tuning the prompt.
This time, I use the PromptGenerator method, which has ChatGPT generate prompts, to produce an optimized prompt, then generate test code with that prompt using the same approach as last time and evaluate the output.
Goals
- Use the PromptGenerator method to generate an optimized prompt to replace the previous one.
- Generate test code from the documents using that prompt and evaluate the output.
Disclaimer
- The content of this article reflects the state at the time of writing. Because LLMs change and their output fluctuates, reproducibility is not guaranteed.
- This article is a proof-of-concept level exploration. It does not describe a complete methodology.
- The PlantUML used in this article is not polished in every detail and may contain minor mistakes.
About PromptGenerator
- The original is this post by Bret Littlefield.
- For explanations in Japanese, see the following articles.
Generating a prompt with a prompt
The prompt was generated through the following exchange with ChatGPT (GPT-4).
The prompt was regenerated 15 times, and the exchange also incorporates the improvement points raised last time.
Initial prompt
```
I want you to be my prompt engineer. Your goal is to help me craft the best prompt for my needs. That prompt will be used by you in ChatGPT.
```
What should the prompt be about?
Answer
It’s about test code generation in Python from materials such as OpenAI Specifications, PlantUML sequence diagrams, PlantUML class diagrams, and so on.
Generated prompt #1
a) Revised Prompt:
How can one generate test code in Python using materials like OpenAI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams?
b) Suggestions:
- Briefly explain the purpose of using these materials for test code generation.
- Describe the process of converting diagrams and specifications into Python test code.
- Mention any libraries or tools that can be used to facilitate this process.
c) Questions:
- Are there any specific tools or libraries you would like to know more about for this purpose?
- What is your level of expertise in Python and these materials?
- Do you have any specific use case or project in mind for this prompt?
Answer
As my answer for your question 1, we should use unittest module in the test code.
As my answer for your question 2, I am an expert of Python programmer. I have experienced Python for more than a decade.
As my answer for your question 3, the use case of this prompt is enterprise software development. They need to create not only software based on test codes but also documents that explain the software. They will write test codes after writing the documents such as OpenAPI Specifications, PlantUML sequence diagrams and PlantUML class diagrams. So, if they get test codes using the prompt based on the documents, they will be happy to save a lot of effort to write test codes.
Generated prompt #2
a) Revised Prompt:
How can one generate test code in Python using materials like OpenAI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams for enterprise software development, with a focus on utilizing the unittest module?
b) Suggestions:
- Explain how the unittest module can be used to generate test code based on these materials.
- Describe the process of converting diagrams and specifications into Python test code using the unittest module.
- Provide examples of how test code can be generated from OpenAPI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams for enterprise software development.
c) Questions:
- Would you like me to focus on any specific aspects of enterprise software development in the prompt?
- Are there any additional Python libraries or tools you would like to incorporate in the test code generation process?
Answer
As my answer for your question 1, I would like you to focus on generate MECE (mutually exclusive and collectively exhaustive) test cases, and organized, structured and clean test codes. So you should consider not only normal test cases but also error cases including corner or edge cases. To realize that, you need to understand how the software is used from the documents and make some assumptions for test data which you need to prepare and use in the test cases.
As my answer for your question 2, there are no additional Python libraries or tools. But, to use test data in the test codes, you will use patch module of unittest.mock.
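To make the idea of MECE coverage concrete, here is a minimal sketch in Python's unittest. The `calculate_discount` function is invented purely for illustration and is not part of the documents used later in this article; the point is that normal, edge, and error cases together partition the input space.

```python
import unittest

# Hypothetical function under test, used only to illustrate MECE coverage.
def calculate_discount(price: float, rate: float) -> float:
    if price < 0 or not (0.0 <= rate <= 1.0):
        raise ValueError("invalid price or rate")
    return price * (1.0 - rate)

class TestCalculateDiscount(unittest.TestCase):
    # Normal case
    def test_half_rate(self):
        self.assertEqual(calculate_discount(100.0, 0.5), 50.0)

    # Edge cases: the boundaries of the valid rate range
    def test_zero_rate(self):
        self.assertEqual(calculate_discount(100.0, 0.0), 100.0)

    def test_full_rate(self):
        self.assertEqual(calculate_discount(100.0, 1.0), 0.0)

    # Error cases: inputs outside the valid range
    def test_negative_price(self):
        with self.assertRaises(ValueError):
            calculate_discount(-1.0, 0.5)

    def test_rate_above_one(self):
        with self.assertRaises(ValueError):
            calculate_discount(100.0, 1.5)

if __name__ == "__main__":
    unittest.main()
```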
Note: Generated prompts #3 through #13 are omitted here (they are long, and in English). They are included in the Appendix at the end of this article, so refer to them as needed.
Generated prompt #14
a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase, and additional test data can be specified in the test cases if the setUp test data is not sufficient. When using the patch module of unittest.mock for mocking, the @patch(‘…’) decorator should be used instead of the with patch(‘…’) as mock_foo: statement. Additionally, the spy test double pattern should be applied using the assert_called_once_with method of patch.object when applicable for the test case, and the with patch.object(…) as mock_foo: statement can be used in such cases. Each test case should follow the AAA (Arrange-Act-Assert) style. Be mindful of token limitations and avoid exceeding them when generating test code, and omit comments to save the number of characters.
b) Suggestions:
- Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase, the possibility to add extra test data in the test cases if necessary, the use of @patch(‘…’) decorator for mocking, the application of the spy test double pattern using the assert_called_once_with method of patch.object when applicable, following the AAA (Arrange-Act-Assert) style for each test case, being mindful of token limitations when generating test code, and omitting comments to save the number of characters.
- Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, ensuring the test data is immutable by defining it in the setUp method of the test class, specifying additional test data in the test cases if required, using the @patch(‘…’) decorator for mocking, applying the spy test double pattern with the assert_called_once_with method of patch.object when applicable, adhering to the AAA (Arrange-Act-Assert) style for each test case, taking token limitations into account when generating test code, and refraining from using comments to save the number of characters.
- Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, defines immutable test data in the setUp method of the test class that inherits unittest.TestCase, allows for additional test data to be specified in the test cases when necessary, employs the @patch(‘…’) decorator for mocking purposes, uses the spy test double pattern with the assert_called_once_with method of patch.object when appropriate, follows the AAA (Arrange-Act-Assert) style for each test case, remains within token limitations when generating test code, and avoids using comments to save the number of characters.
c) Questions:
- Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
- Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?
Answer
As my answer for your question 1, I would like you to organize and bulletize the revised prompt and suggestions. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so please make further adjustments.
Generated prompt #15
a) Revised Prompt:
Generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like:
- OpenAPI Specifications (in JSON or YAML)
- PlantUML sequence diagrams (in text)
- PlantUML class diagrams (in text)
Focus on utilizing the unittest module and patch module of unittest.mock while:
- Generating test cases and preparing test data step-by-step
- Allowing the user to specify test targets based on the actors of the sequence diagrams
- Basing test cases on class diagrams first, then sequence diagrams, and finally OpenAPI specifications for API servers
- Defining immutable test data in the setUp method of the test class that inherits unittest.TestCase
- Specifying additional test data in test cases if needed
- Using the @patch(‘…’) decorator for mocking instead of with patch(‘…’) as mock_foo: statement
- Applying the spy test double pattern using the assert_called_once_with method of patch.object when applicable
- Using the with patch.object(…) as mock_foo: statement in spy test double pattern cases
- Following the AAA (Arrange-Act-Assert) style for each test case
- Being mindful of token limitations and avoiding exceeding them when generating test code
- Omitting comments to save the number of characters
b) Suggestions:
- Create organized, structured, and clean test code by:
- Following a step-by-step approach
- Considering normal, error, corner, and edge cases for different types of enterprise software
- Focusing on class diagrams, sequence diagrams, and OpenAPI specifications in that order
- Defining immutable test data in the setUp method of the test class that inherits unittest.TestCase
- Adding extra test data in the test cases if necessary
- Using the @patch(‘…’) decorator for mocking
- Applying the spy test double pattern using the assert_called_once_with method of patch.object when applicable
- Following the AAA (Arrange-Act-Assert) style for each test case
- Being mindful of token limitations when generating test code
- Omitting comments to save the number of characters
- Understand software usage from the documents and make assumptions for test data preparation by:
- Preparing necessary test data step-by-step
- Focusing on specified test targets
- Considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order
- Ensuring the test data is immutable by defining it in the setUp method of the test class
- Specifying additional test data in the test cases if required
- Using the @patch(‘…’) decorator for mocking
- Applying the spy test double pattern with the assert_called_once_with method of patch.object when applicable
- Adhering to the AAA (Arrange-Act-Assert) style for each test case
- Taking token limitations into account when generating test code
- Refraining from using comments to save the number of characters
- Provide a general approach for generating test code by:
- Incorporating the unittest module and patch module of unittest.mock
- Applying the approach across various types of software
- Allowing users to specify test targets based on the actors of the sequence diagrams
- Following the order of considering class diagrams, sequence diagrams, and OpenAPI specifications
- Defining immutable test data in the setUp method of the test class that inherits unittest.TestCase
- Allowing for additional test data to be specified in the test cases when necessary
- Employing the @patch(‘…’) decorator for mocking purposes
- Using the spy test double pattern with the assert_called_once_with method of patch.object when appropriate
- Following the AAA (Arrange-Act-Assert) style for each test case
- Remaining within token limitations when generating test code
- Avoiding using comments to save the number of characters
c) Questions:
- Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
- Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?
Since the prompt now conveyed everything I wanted it to, I ended the exchange here.
I don’t think I need to answer those questions because we are done!
Great! I’m glad the revised prompt and suggestions meet your requirements. If you have any more questions or need further assistance in the future, please don’t hesitate to ask. Have a great day!
This article adopts prompt #14. (In #15, the relationship between the lead-in sentence beginning with `Focus on` and the individual items, in particular the scope of `while`, appears to have changed from #14.)
For reference, a Japanese rendering of prompt #14 produced by a machine translation service is shown below.
OpenAPI仕様書(JSONまたはYAML)、PlantUMLシーケンス図(テキスト)、PlantUMLクラス図(テキスト)などの素材を用いて、様々な種類のエンタープライズソフトウェアに対してPythonでMECE(相互排他的かつ集合的に網羅する)テストコードを生成するには、unittest.mockのモジュールとパッチモジュールを活用することに焦点を当て、テストケース生成とテストデータの準備を段階的に行い、シーケンス図のアクターをもとにユーザーがテスト対象を指定できるようにするには?テストケースは、まずクラス図に基づき、次にシーケンス図を考慮し、最後にテスト対象がAPIサーバーの場合はOpenAPI仕様を考慮したものにする。テストデータは、unittest.TestCaseを継承したテストクラスのsetUpメソッドで定義された不変のものとし、setUpテストデータで足りない場合はテストケースで追加のテストデータを指定することが可能です。モッキングに unittest.mock の patch モジュールを使用する場合、with patch(‘…’) as mock_foo: 文の代わりに @patch(‘…’) デコレータを使用する必要があります。さらに、テストケースに適用できる場合は、patch.object の assert_called_once_with メソッドを使ってスパイテストダブルパターンを適用すべきであり、このような場合は with patch.object(…) as mock_foo: 文を使うことができる。各テストケースは、AAA(Arrange-Act-Assert)スタイルに従うべきです。テストコードを生成する際には、トークンの制限を意識して制限を超えないようにし、コメントを省略して文字数を節約してください。
The token usage up to prompt #14 was as follows.
- Token breakdown:
- Message history: 4868
- User query: 0
- System Message: 19
- Few-shot examples: 1
- Max response: 1600
- Estimated max tokens: 6488
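Before moving on, here is a minimal sketch of the coding style that prompt #14 asks for: immutable test data in setUp, the @patch decorator for mocking a dependency, the spy pattern via patch.object with assert_called_once_with, and AAA structure. The OrderService and Notifier classes are invented for this illustration and are not the classes from the article's documents.

```python
import unittest
from unittest.mock import patch

# Made-up production code so the example runs on its own; it is not the
# system described by this article's OpenAPI/PlantUML documents.
class Notifier:
    def send(self, user_id, message):
        raise RuntimeError("would call an external service")

class OrderService:
    def __init__(self, notifier):
        self.notifier = notifier

    def create_order(self, user_id, total):
        order_id = 10  # pretend the order was persisted
        self.notifier.send(user_id, f"order {order_id} created")
        return order_id

class TestOrderService(unittest.TestCase):
    def setUp(self):
        # Immutable test data shared by all test cases
        self.user = {"id": 1, "name": "Alice"}
        self.total = 1200

    @patch("__main__.Notifier")  # mock the dependency with the decorator form
    def test_create_order_returns_id(self, mock_notifier_cls):
        # Arrange
        service = OrderService(mock_notifier_cls.return_value)
        # Act
        order_id = service.create_order(self.user["id"], self.total)
        # Assert
        self.assertEqual(order_id, 10)

    def test_create_order_notifies_user(self):
        # Arrange: spy on the collaborator with patch.object
        notifier = Notifier()
        service = OrderService(notifier)
        with patch.object(notifier, "send") as mock_send:
            # Act
            order_id = service.create_order(self.user["id"], self.total)
            # Assert
            mock_send.assert_called_once_with(self.user["id"], f"order {order_id} created")

if __name__ == "__main__":
    unittest.main()
```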
Preparation
The same OpenAPI specification, PlantUML sequence diagram, and PlantUML class diagram as last time are used.
Prompt
The prompt is as follows. This time the full text is included in this article.
```
/*
```
Model
As before, `text-davinci-003` is used.
Input/output interface
As before, the Playground in Azure OpenAI Studio is used.
Execution steps
Same as last time.
Results
The prompt above was run five times and each result was collected. The table below shows, for each result, a formal check against the instructions in the prompt. (The generated prompt #15 was used to enumerate the instruction items.)
# | Instruction | Result 1 | Result 2 | Result 3 | Result 4 | Result 5 |
---|---|---|---|---|---|---|
1 | MECE (mutually exclusive and collectively exhaustive) test code | ✓ | ✓ | ✓ | ✓ | ✓ |
2 | in Python | ✓ | ✓ | ✓ | ✓ | ✓ |
3 | utilizing the unittest module and patch module of unittest.mock | ✓ | ✓ | ✓ | ✓ | ✓ |
4 | Allowing the user to specify test targets based on the actors of the sequence diagrams | ✓ | ✓ | ✓ | ✓ | ✓ |
5 | Basing test cases on class diagrams first | ✓ | - | ✓ | ✓ | ✓ |
6 | then sequence diagrams | ✓ | ✓ | ✓ | ✓ | ✓ |
7 | and finally OpenAPI specifications for API servers | ✓ | ✓ | ✓ | ✓ | - |
8 | Defining immutable test data in the setUp method of the test class that inherits unittest.TestCase | ✓ | ✓ | ✓ | ✓ | ✓ |
9 | Specifying additional test data in test cases if needed | - | - | ✓ | - | - |
10 | Using the `@patch('...')` decorator for mocking instead of `with patch('...') as mock_foo:` statement | ✓ | - | - | - | - |
11 | Applying the spy test double pattern using the assert_called_once_with method of patch.object when applicable | ✓ | ✓ | ✓ | ✓ | ✓ |
12 | Using the `with patch.object(...) as mock_foo:` statement in spy test double pattern cases | - | ✓ | ✓ | - | ✓ |
13 | Following the AAA (Arrange-Act-Assert) style for each test case | ✓ | ✓ | ✓ | ✓ | ✓ |
14 | Being mindful of token limitations and avoiding exceeding them when generating test code | ✓ | - | ✓ | ✓ | ✓ |
15 | Omitting comments to save the number of characters | - | - | - | - | ✓ |
Every result satisfies most of the instructions.
Taking the relative importance of each instruction item into account, Result 1 stands out from the rest.
Below are the individual results and their evaluations. Points already covered by the formal check above are not mentioned again in the individual evaluations.
Result 1
```python
import unittest
```
Good points
- Nothing in particular to mention
Areas for improvement
- A test double is used for the test target itself.
- The test data `self.order_data` is not used.
Result 2
```python
import unittest
```
Completions call failed:
This model’s maximum context length is 4097 tokens, however you requested 4112 tokens (4012 in your prompt; 100 for the completion). Please reduce your prompt; or completion length.
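The failure above is the context-length limit of text-davinci-003 (4,097 tokens shared between prompt and completion). When calling the API from code rather than the Playground, one practical safeguard is to count the prompt's tokens locally with the tiktoken library before sending it. A rough sketch; the completion budget of 100 tokens mirrors the error message above:

```python
import tiktoken

MAX_CONTEXT = 4097        # context window of text-davinci-003
COMPLETION_BUDGET = 100   # max_tokens planned for the completion

def prompt_fits(prompt: str) -> bool:
    # tiktoken resolves the encoding used by text-davinci-003 (p50k_base)
    encoding = tiktoken.encoding_for_model("text-davinci-003")
    n_prompt_tokens = len(encoding.encode(prompt))
    print(f"prompt tokens: {n_prompt_tokens}, "
          f"remaining for completion: {MAX_CONTEXT - n_prompt_tokens}")
    return n_prompt_tokens + COMPLETION_BUDGET <= MAX_CONTEXT
```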
Good points
- Picked up `datetime`, `created_at`, and `updated_at` from the SQL written in the sequence diagram and filled them in.
- Transactions are tested in fine detail.
Areas for improvement
- A test double is used for the test target itself.
- The `with` statements are deeply nested.
Result 3
```python
import unittest
```
Good points
- Fills in the code assuming a `limit` parameter.
- Transactions are tested.
Areas for improvement
- A test double is used for the test target itself.
- The test data `self.test_user` and `self.test_order` are not used.
Result 4
```python
import unittest
```
Good points
- Nothing in particular to mention
Areas for improvement
- A test double is used for the test target itself.
- The test data `self.test_user` and `self.test_order` are not used.
Result 5
```python
import unittest
```
Good points
- Transactions are tested.
- Assumes that an exception is raised when a transaction fails.
Areas for improvement
- A test double is used for the test target itself.
- Other than that, nothing in particular to mention
Overall assessment
The results now follow the prompt more faithfully and more consistently than last time.
The problem common to all the results is that test doubles are applied to the test target itself.
A human programmer would not do this; I realized only afterwards that an AI will.
It seems that the AI knows how to write test doubles but does not understand what they are for. (It may well be able to explain their purpose in words, though.)
This should have been fed in beforehand, when generating the prompt with PromptGenerator.
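To spell the problem out with a hypothetical example (the class names are invented): in the pattern the AI produced, the service under test is itself replaced by a mock, so its real logic never runs; only the dependency should be replaced.

```python
import unittest
from unittest.mock import patch

class PaymentGateway:          # dependency: this is what should be mocked
    def charge(self, amount):
        raise RuntimeError("real external call")

class OrderService:            # test target: this should run for real
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount):
        self.gateway.charge(amount)
        return "paid"

class TestCheckout(unittest.TestCase):
    # Anti-pattern seen in the generated results: the target itself is mocked,
    # so the test passes without ever executing checkout()'s real logic.
    @patch("__main__.OrderService")
    def test_checkout_mocks_the_target(self, mock_service_cls):
        mock_service_cls.return_value.checkout.return_value = "paid"
        self.assertEqual(mock_service_cls.return_value.checkout(100), "paid")

    # Intended pattern: only the dependency is mocked, the real target runs.
    @patch("__main__.PaymentGateway")
    def test_checkout_mocks_the_dependency(self, mock_gateway_cls):
        service = OrderService(mock_gateway_cls.return_value)
        self.assertEqual(service.checkout(100), "paid")
        mock_gateway_cls.return_value.charge.assert_called_once_with(100)

if __name__ == "__main__":
    unittest.main()
```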
Assuming that problem gets solved, and despite some differences among the results, I think almost any of them could be adopted as is, except for Result 2, which hit the token limit.
Among them, Result 1 follows the prompt most faithfully, and personally it is the one I would adopt as the test code.
Summary
Using the PromptGenerator method, in which ChatGPT generates the prompt, I tuned the prompt, generated test code with it using the same approach as last time, and evaluated the output.
There are many ways to tune a prompt, but simply adopting this method was enough to improve the quality of the output dramatically.
Last time I judged the output usable only as code snippets; this time the output is very nearly complete test code. I was surprised that zero-shot generation can get this far.
However, applying test doubles to the test target itself is a problem, and it needs to be addressed at the PromptGenerator stage.
The input documents were only sample-level; refining them should make it possible to generate more practical test code.
At that point the token limit will again become an issue, so GPT-4 will likely be needed.
Also, to improve reproducibility, it will eventually be worth trying to bring the `Temperature` closer to 0 for both prompt generation and test code generation.
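As a reference, if the generation were moved from the Playground to the pre-1.0 openai Python SDK against Azure OpenAI, the temperature could be pinned to 0 roughly like this (the endpoint, API version, key, and prompt are placeholders):

```python
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"  # placeholder
openai.api_version = "2023-05-15"                               # placeholder
openai.api_key = "<your-api-key>"                               # placeholder

response = openai.Completion.create(
    engine="text-davinci-003",   # the deployment name in Azure OpenAI
    prompt="<generated prompt + input documents>",
    temperature=0,               # minimize randomness for reproducibility
    max_tokens=1600,
)
print(response["choices"][0]["text"])
```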
With that, this effort to generate test code from documents (an OpenAPI specification and PlantUML sequence and class diagrams) with Azure OpenAI Service and to evaluate the output comes to a close, at least as far as GPT-3 is concerned.
Appendix: Generated prompts #3 to #13
Generated prompt #3
a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python using materials like OpenAI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams for enterprise software development, focusing on utilizing the unittest module and patch module of unittest.mock?
b) Suggestions:
- Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases.
- Describe the process of understanding software usage from the documents and making assumptions for test data preparation.
- Provide examples of how test code can be generated from OpenAPI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams, incorporating the unittest module and patch module of unittest.mock.
c) Questions:
- Are there any specific examples or scenarios you would like me to include in the prompt?
- Would you like me to focus on a specific type of enterprise software or a general approach that can be applied across various types of software?
Answer
As my answer for your question 1, there are no examples or scenarios.
As my answer for your question 2, I would like you to focus on a general approach that can be applied across various types of software.
Generated prompt #4
a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams, focusing on utilizing the unittest module and patch module of unittest.mock?
b) Suggestions:
- Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software.
- Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context.
- Provide a general approach for generating test code from OpenAPI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams, incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software.
c) Questions:
- Is there any other aspect of test code generation you would like me to focus on?
- Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?
Answer
As my answer for your question 1, I would like you to generate test codes step-by-step. You will think of what test case you should generate next for each test case. You will prepare necessary test data step-by-step as well.
As my answer for your question 2, no, I still am not. In the prompt, I will give you OpenAPI Specifications in JSON or YAML, PlantUML sequence diagrams in text, and PlantUML class diagrams in text as well. I would like you to prepare fields for these input materials that you will receive.
Generated prompt #5
a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step?
b) Suggestions:
- Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach.
- Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step.
- Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software.
c) Questions:
- Are there any other specific requirements for the input materials (OpenAPI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams) that you would like me to consider?
- Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?
Answer
As my answer for your question 1, I would like you to let me specify the test target, meaning what to test. It is usually chosen from actors of the sequence diagrams. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions. Please make further adjustments.
Generated prompt #6
a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams?
b) Suggestions:
- Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach and an option to specify test targets based on the actors of the sequence diagrams.
- Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step and focusing on the specified test targets.
- Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software and allows users to specify test targets based on the actors of the sequence diagrams.
c) Questions:
- Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
- Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?
Answer
As my answer for your question 1, I would like to emphasize that test cases should be based on the class diagrams first. And then, you consider the sequence diagrams. Do not forget considering the OpenAPI specifications when the test target is an API server. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so please make further adjustments.
Generated prompt #7
a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server.
b) Suggestions:
- Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, and a focus on class diagrams followed by sequence diagrams and OpenAPI specifications.
- Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, and considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order.
- Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, and follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications.
c) Questions:
- Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
- Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?
Answer
As my answer for your question 1, I would like you to make the test data immutable. To achieve that, you can define them in setUp method of the test class which inherits unittest.TestCase. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so I expect you to make further adjustments.
Generated prompt #8
a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase.
b) Suggestions:
- Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, and the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase.
- Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, and ensuring the test data is immutable by defining it in the setUp method of the test class.
- Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, and defines immutable test data in the setUp method of the test class that inherits unittest.TestCase.
c) Questions:
- Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
- Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?
Answer
As my answer for your question 1, if the test data which you defined in the setUp method of the test class do not make sense for additional test cases you considered based on the documents, you can specify additional test data in the test cases. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so I would like you to make further adjustments.
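A small sketch of what these two requirements (immutable setUp data plus per-test additions when needed) look like together; the data values and class are invented:

```python
import unittest

class TestUserOrders(unittest.TestCase):
    def setUp(self):
        # Baseline test data, defined once and never mutated by the tests
        self.test_user = {"id": 1, "name": "Alice"}
        self.test_order = {"id": 10, "user_id": 1, "total": 500}

    def test_order_belongs_to_user(self):
        self.assertEqual(self.test_order["user_id"], self.test_user["id"])

    def test_zero_total_order(self):
        # The setUp data does not cover this case, so additional data is
        # declared locally instead of modifying self.test_order
        zero_total_order = {"id": 11, "user_id": 1, "total": 0}
        self.assertEqual(zero_total_order["total"], 0)

if __name__ == "__main__":
    unittest.main()
```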
Generated prompt #9
a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase, and additional test data can be specified in the test cases if the setUp test data is not sufficient.
b) Suggestions:
- Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, and the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase, with the possibility to add extra test data in the test cases if necessary.
- Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, ensuring the test data is immutable by defining it in the setUp method of the test class, and specifying additional test data in the test cases if required.
- Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, defines immutable test data in the setUp method of the test class that inherits unittest.TestCase, and allows for additional test data to be specified in the test cases when necessary.
c) Questions:
- Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
- Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?
Answer
As my answer for your question 1, when you use the patch module of unittest.mock for mocking, I would like you to use a decorator like `@patch('...')` rather than a with statement like `with patch('...') as mock_foo:`. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so please make further adjustments.
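The difference between the two forms, with an invented fetch_price function standing in for an external call:

```python
import unittest
from unittest.mock import patch

def fetch_price(item_id):      # stand-in for a call to an external system
    raise RuntimeError("network access")

def price_with_tax(item_id):
    return fetch_price(item_id) * 1.1

class TestPriceWithTax(unittest.TestCase):
    # Requested form: the decorator
    @patch("__main__.fetch_price", return_value=100)
    def test_decorator_form(self, mock_fetch):
        self.assertAlmostEqual(price_with_tax(1), 110.0)

    # Equivalent, but the form the answer above asks to avoid
    def test_with_statement_form(self):
        with patch("__main__.fetch_price", return_value=100):
            self.assertAlmostEqual(price_with_tax(1), 110.0)

if __name__ == "__main__":
    unittest.main()
```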
Generated prompt #10
a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase, and additional test data can be specified in the test cases if the setUp test data is not sufficient. When using the patch module of unittest.mock for mocking, the @patch(‘…’) decorator should be used instead of the with patch(‘…’) as mock_foo: statement.
b) Suggestions:
- Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase, the possibility to add extra test data in the test cases if necessary, and the use of @patch(‘…’) decorator for mocking.
- Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, ensuring the test data is immutable by defining it in the setUp method of the test class, specifying additional test data in the test cases if required, and using the @patch(‘…’) decorator for mocking.
- Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, defines immutable test data in the setUp method of the test class that inherits unittest.TestCase, allows for additional test data to be specified in the test cases when necessary, and employs the @patch(‘…’) decorator for mocking purposes.
c) Questions:
- Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
- Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?
Answer
As my answer for your question 1, I would like you to apply the spy test double pattern using the `assert_called_once_with` method of `patch.object` when the pattern is applicable for the test case. If you use the spy pattern, you can use `patch.object` under a with statement like `with patch.object(...) as mock_foo:`. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so I would like you to make further adjustments.
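A minimal illustration of the spy pattern described above, with an invented Mailer collaborator: the real object stays in place and patch.object records the call so it can be verified with assert_called_once_with.

```python
import unittest
from unittest.mock import patch

class Mailer:
    def send(self, to, body):
        raise RuntimeError("would send a real mail")

class Signup:
    def __init__(self, mailer):
        self.mailer = mailer

    def register(self, email):
        self.mailer.send(email, "welcome")
        return True

class TestSignup(unittest.TestCase):
    def test_register_sends_welcome_mail(self):
        mailer = Mailer()
        signup = Signup(mailer)
        # Spy on the collaborator and verify the interaction afterwards
        with patch.object(mailer, "send") as mock_send:
            self.assertTrue(signup.register("a@example.com"))
            mock_send.assert_called_once_with("a@example.com", "welcome")

if __name__ == "__main__":
    unittest.main()
```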
Generated prompt #11
a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase, and additional test data can be specified in the test cases if the setUp test data is not sufficient. When using the patch module of unittest.mock for mocking, the @patch(‘…’) decorator should be used instead of the with patch(‘…’) as mock_foo: statement. Additionally, the spy test double pattern should be applied using the assert_called_once_with method of patch.object when applicable for the test case, and the with patch.object(…) as mock_foo: statement can be used in such cases.
b) Suggestions:
- Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase, the possibility to add extra test data in the test cases if necessary, the use of @patch(‘…’) decorator for mocking, and the application of the spy test double pattern using the assert_called_once_with method of patch.object when applicable.
- Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, ensuring the test data is immutable by defining it in the setUp method of the test class, specifying additional test data in the test cases if required, using the @patch(‘…’) decorator for mocking, and applying the spy test double pattern with the assert_called_once_with method of patch.object when applicable.
- Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, defines immutable test data in the setUp method of the test class that inherits unittest.TestCase, allows for additional test data to be specified in the test cases when necessary, employs the @patch(‘…’) decorator for mocking purposes, and uses the spy test double pattern with the assert_called_once_with method of patch.object when appropriate.
c) Questions:
- Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
- Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?
Answer
As my answer for your question 1, I would like you to write each test case in AAA (Arrange-Act-Assert) style. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so please make further adjustments.
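For reference, the AAA layout requested here looks like this in a single test (the stack example is invented):

```python
import unittest

class TestStack(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        # Arrange
        stack = []
        stack.append("first")
        # Act
        stack.append("second")
        popped = stack.pop()
        # Assert
        self.assertEqual(popped, "second")
        self.assertEqual(stack, ["first"])

if __name__ == "__main__":
    unittest.main()
```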
Generated prompt #12
a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase, and additional test data can be specified in the test cases if the setUp test data is not sufficient. When using the patch module of unittest.mock for mocking, the @patch(‘…’) decorator should be used instead of the with patch(‘…’) as mock_foo: statement. Additionally, the spy test double pattern should be applied using the assert_called_once_with method of patch.object when applicable for the test case, and the with patch.object(…) as mock_foo: statement can be used in such cases. Each test case should follow the AAA (Arrange-Act-Assert) style.
b) Suggestions:
- Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase, the possibility to add extra test data in the test cases if necessary, the use of @patch(‘…’) decorator for mocking, the application of the spy test double pattern using the assert_called_once_with method of patch.object when applicable, and following the AAA (Arrange-Act-Assert) style for each test case.
- Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, ensuring the test data is immutable by defining it in the setUp method of the test class, specifying additional test data in the test cases if required, using the @patch(‘…’) decorator for mocking, applying the spy test double pattern with the assert_called_once_with method of patch.object when applicable, and adhering to the AAA (Arrange-Act-Assert) style for each test case.
- Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, defines immutable test data in the setUp method of the test class that inherits unittest.TestCase, allows for additional test data to be specified in the test cases when necessary, employs the @patch(‘…’) decorator for mocking purposes, uses the spy test double pattern with the assert_called_once_with method of patch.object when appropriate, and follows the AAA (Arrange-Act-Assert) style for each test case.
c) Questions:
- Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
- Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?
Answer
As my answer for your question 1, if you are aware of current token count, be careful not to exceed the limitation of tokens especially if you are a model of GPT-3. The token length limit is 4096 for GPT-3, 8192 for GPT-4, and 32768 for GPT-4-32k. Not to exceed the limitation, you do not need to output comments. If you are about to exceed the limitation, you can stop outputting next test case and finish the test class. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so I would like you to make further adjustments.
Generated prompt #13
a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase, and additional test data can be specified in the test cases if the setUp test data is not sufficient. When using the patch module of unittest.mock for mocking, the @patch(‘…’) decorator should be used instead of the with patch(‘…’) as mock_foo: statement. Additionally, the spy test double pattern should be applied using the assert_called_once_with method of patch.object when applicable for the test case, and the with patch.object(…) as mock_foo: statement can be used in such cases. Each test case should follow the AAA (Arrange-Act-Assert) style. Be mindful of token limitations and avoid exceeding them when generating test code.
b) Suggestions:
- Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase, the possibility to add extra test data in the test cases if necessary, the use of @patch(‘…’) decorator for mocking, the application of the spy test double pattern using the assert_called_once_with method of patch.object when applicable, following the AAA (Arrange-Act-Assert) style for each test case, and being mindful of token limitations when generating test code.
- Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, ensuring the test data is immutable by defining it in the setUp method of the test class, specifying additional test data in the test cases if required, using the @patch(‘…’) decorator for mocking, applying the spy test double pattern with the assert_called_once_with method of patch.object when applicable, adhering to the AAA (Arrange-Act-Assert) style for each test case, and taking token limitations into account when generating test code.
- Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, defines immutable test data in the setUp method of the test class that inherits unittest.TestCase, allows for additional test data to be specified in the test cases when necessary, employs the @patch(‘…’) decorator for mocking purposes, uses the spy test double pattern with the assert_called_once_with method of patch.object when appropriate, follows the AAA (Arrange-Act-Assert) style for each test case, and remains within token limitations when generating test code.
c) Questions:
- Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
- Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?
Answer
As my answer for your question 1, you do not have to write comments to save the number of characters and not to exceed the token limitation. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so please make further adjustments.