kt.log

Test code generation from Swagger and PlantUML with Azure OpenAI Service using PromptGenerator method


In the previous post, I described an attempt to generate test code from OpenAPI and PlantUML using Azure OpenAI Service.
One thing that was still missing there was prompt tuning.
This time, I use a method called PromptGenerator, which has ChatGPT generate prompts, to produce an optimized prompt, then generate test code with that prompt using the same approach as last time, and evaluate the output.

Goals

  • Use the PromptGenerator method to generate an optimized prompt that replaces the previous one.
  • Generate test code from the documents with that prompt and evaluate the output.

Disclaimer

  • The content of this article reflects the time of writing. Because LLMs change and their output fluctuates, reproducibility is not guaranteed.
  • The content of this article is at a proof-of-concept level. It does not claim to provide a complete methodology.
  • The PlantUML used in this article is not polished in every detail and may contain minor mistakes.

About PromptGenerator

Generating a prompt with a prompt

I generated the prompt through the following exchange with ChatGPT (GPT-4).
The prompt went through 15 iterations, which also incorporated the improvement points raised in the previous post.

Initial prompt

I want you to be my prompt engineer. Your goal is to help me craft the best prompt for my needs. That prompt will be used by you in ChatGPT.

Follow the process below.

1. Your first response is to ask me what the prompt should be about.
I'll provide my answer, but it should go through the next steps and improve through continuous iterations.

2. Generate two sections based on my input.
a) Revised Prompt (Provide a rewritten prompt. It should be clear, concise, and easy for you to understand.)
b) suggestions (suggesting details to include in the prompt to improve it);
c) Questions (ask any relevant questions about any additional information you need from me to improve the prompt)

3. This iterative process continues until I provide you with additional information, you update the prompts in the Revised Prompts section, and I am done.

What should the prompt be about?

Answer

It’s about test code generation in Python from materials such as OpenAI Specifications, PlantUML sequence diagrams, PlantUML class diagrams, and so on.

Generated prompt #1

a) Revised Prompt:
How can one generate test code in Python using materials like OpenAI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams?

b) Suggestions:

  • Briefly explain the purpose of using these materials for test code generation.
  • Describe the process of converting diagrams and specifications into Python test code.
  • Mention any libraries or tools that can be used to facilitate this process.

c) Questions:

  1. Are there any specific tools or libraries you would like to know more about for this purpose?
  2. What is your level of expertise in Python and these materials?
  3. Do you have any specific use case or project in mind for this prompt?

Answer

As my answer for your question 1, we should use unittest module in the test code.
As my answer for your question 2, I am an expert of Python programmer. I have experienced Python for more than a decade.
As my answer for your question 3, the use case of this prompt is enterprise software development. They need to create not only software based on test codes but also documents that explain the software. They will write test codes after writing the documents such as OpenAPI Specifications, PlantUML sequence diagrams and PlantUML class diagrams. So, if they get test codes using the prompt based on the documents, they will be happy to save a lot of effort to write test codes.

Generated prompt #2

a) Revised Prompt:
How can one generate test code in Python using materials like OpenAI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams for enterprise software development, with a focus on utilizing the unittest module?

b) Suggestions:

  • Explain how the unittest module can be used to generate test code based on these materials.
  • Describe the process of converting diagrams and specifications into Python test code using the unittest module.
  • Provide examples of how test code can be generated from OpenAPI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams for enterprise software development.

c) Questions:

  1. Would you like me to focus on any specific aspects of enterprise software development in the prompt?
  2. Are there any additional Python libraries or tools you would like to incorporate in the test code generation process?

Answer

As my answer for your question 1, I would like you to focus on generate MECE (mutually exclusive and collectively exhaustive) test cases, and organized, structured and clean test codes. So you should consider not only normal test cases but also error cases including corner or edge cases. To realize that, you need to understand how the software is used from the documents and make some assumptions for test data which you need to prepare and use in the test cases.
As my answer for your question 2, there are no additional Python libraries or tools. But, to use test data in the test codes, you will use patch module of unittest.mock.
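The answer above asks for unittest.mock's patch module to be used for feeding test data into the tests. As a minimal sketch of that idea, with hypothetical stand-in classes (Repository and Service are illustrative names, not part of the article's materials):

```python
import unittest
from unittest.mock import patch

class Repository:
    def load(self):
        raise RuntimeError("real data source not available in tests")

class Service:
    def __init__(self, repo):
        self.repo = repo

    def first_name(self):
        return self.repo.load()[0]["name"]

class TestService(unittest.TestCase):
    def setUp(self):
        # Test data prepared once and treated as immutable
        self.records = ({"id": 1, "name": "Fido"},)

    def test_first_name(self):
        repo = Repository()
        # patch.object replaces load() with a mock that returns the test data
        with patch.object(repo, "load", return_value=list(self.records)) as mock_load:
            service = Service(repo)
            self.assertEqual(service.first_name(), "Fido")
        mock_load.assert_called_once_with()

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestService)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The real data source is never touched; the mock both supplies the prepared data and records the call for verification.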


※ Generated prompts #3 through #13 are omitted here, as they are long and in English. They are included in the Appendix at the end of this article; refer to them as needed.


Generated prompt #14

a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase, and additional test data can be specified in the test cases if the setUp test data is not sufficient. When using the patch module of unittest.mock for mocking, the @patch(‘…’) decorator should be used instead of the with patch(‘…’) as mock_foo: statement. Additionally, the spy test double pattern should be applied using the assert_called_once_with method of patch.object when applicable for the test case, and the with patch.object(…) as mock_foo: statement can be used in such cases. Each test case should follow the AAA (Arrange-Act-Assert) style. Be mindful of token limitations and avoid exceeding them when generating test code, and omit comments to save the number of characters.

b) Suggestions:

  • Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase, the possibility to add extra test data in the test cases if necessary, the use of @patch(‘…’) decorator for mocking, the application of the spy test double pattern using the assert_called_once_with method of patch.object when applicable, following the AAA (Arrange-Act-Assert) style for each test case, being mindful of token limitations when generating test code, and omitting comments to save the number of characters.
  • Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, ensuring the test data is immutable by defining it in the setUp method of the test class, specifying additional test data in the test cases if required, using the @patch(‘…’) decorator for mocking, applying the spy test double pattern with the assert_called_once_with method of patch.object when applicable, adhering to the AAA (Arrange-Act-Assert) style for each test case, taking token limitations into account when generating test code, and refraining from using comments to save the number of characters.
  • Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, defines immutable test data in the setUp method of the test class that inherits unittest.TestCase, allows for additional test data to be specified in the test cases when necessary, employs the @patch(‘…’) decorator for mocking purposes, uses the spy test double pattern with the assert_called_once_with method of patch.object when appropriate, follows the AAA (Arrange-Act-Assert) style for each test case, remains within token limitations when generating test code, and avoids using comments to save the number of characters.

c) Questions:

  1. Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
  2. Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?

Answer

As my answer for your question 1, I would like you to organize and bulletize the revised prompt and suggestions. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so please make further adjustments.

Generated prompt #15

a) Revised Prompt:
Generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like:

  • OpenAPI Specifications (in JSON or YAML)
  • PlantUML sequence diagrams (in text)
  • PlantUML class diagrams (in text)

Focus on utilizing the unittest module and patch module of unittest.mock while:

  • Generating test cases and preparing test data step-by-step
  • Allowing the user to specify test targets based on the actors of the sequence diagrams
  • Basing test cases on class diagrams first, then sequence diagrams, and finally OpenAPI specifications for API servers
  • Defining immutable test data in the setUp method of the test class that inherits unittest.TestCase
  • Specifying additional test data in test cases if needed
  • Using the @patch(‘…’) decorator for mocking instead of with patch(‘…’) as mock_foo: statement
  • Applying the spy test double pattern using the assert_called_once_with method of patch.object when applicable
  • Using the with patch.object(…) as mock_foo: statement in spy test double pattern cases
  • Following the AAA (Arrange-Act-Assert) style for each test case
  • Being mindful of token limitations and avoiding exceeding them when generating test code
  • Omitting comments to save the number of characters

b) Suggestions:

  • Create organized, structured, and clean test code by:
    • Following a step-by-step approach
    • Considering normal, error, corner, and edge cases for different types of enterprise software
    • Focusing on class diagrams, sequence diagrams, and OpenAPI specifications in that order
    • Defining immutable test data in the setUp method of the test class that inherits unittest.TestCase
    • Adding extra test data in the test cases if necessary
    • Using the @patch(‘…’) decorator for mocking
    • Applying the spy test double pattern using the assert_called_once_with method of patch.object when applicable
    • Following the AAA (Arrange-Act-Assert) style for each test case
    • Being mindful of token limitations when generating test code
    • Omitting comments to save the number of characters
  • Understand software usage from the documents and make assumptions for test data preparation by:
    • Preparing necessary test data step-by-step
    • Focusing on specified test targets
    • Considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order
    • Ensuring the test data is immutable by defining it in the setUp method of the test class
    • Specifying additional test data in the test cases if required
    • Using the @patch(‘…’) decorator for mocking
    • Applying the spy test double pattern with the assert_called_once_with method of patch.object when applicable
    • Adhering to the AAA (Arrange-Act-Assert) style for each test case
    • Taking token limitations into account when generating test code
    • Refraining from using comments to save the number of characters
  • Provide a general approach for generating test code by:
    • Incorporating the unittest module and patch module of unittest.mock
    • Applying the approach across various types of software
    • Allowing users to specify test targets based on the actors of the sequence diagrams
    • Following the order of considering class diagrams, sequence diagrams, and OpenAPI specifications
    • Defining immutable test data in the setUp method of the test class that inherits unittest.TestCase
    • Allowing for additional test data to be specified in the test cases when necessary
    • Employing the @patch(‘…’) decorator for mocking purposes
    • Using the spy test double pattern with the assert_called_once_with method of patch.object when appropriate
    • Following the AAA (Arrange-Act-Assert) style for each test case
    • Remaining within token limitations when generating test code
    • Avoiding using comments to save the number of characters

c) Questions:

  1. Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
  2. Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?

Having conveyed everything I wanted to in the prompt, I ended the session here.

I don’t think I need to answer those questions because we are done!

Great! I’m glad the revised prompt and suggestions meet your requirements. If you have any more questions or need further assistance in the future, please don’t hesitate to ask. Have a great day!

This article adopts prompt #14. (In #15, the relationship between the lead-in starting with "Focus on" and the bullet items, particularly the scope of "while", appears to have drifted from #14.)
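To make the conventions demanded by prompt #14 concrete, here is a minimal, hand-written sketch of a test that follows them: immutable data in setUp, the @patch('...') decorator rather than a with statement, spy-style verification via assert_called_once_with, and AAA structure. The function under test and the patch target (os.getcwd) are illustrative choices, not from the article's materials:

```python
import os
import unittest
from unittest.mock import patch

def workspace_label(prefix):
    # Trivial unit under test; stands in for real application code
    return f"{prefix}:{os.getcwd()}"

class TestWorkspaceLabel(unittest.TestCase):
    def setUp(self):
        # Immutable test data, as the prompt requires
        self.prefix = "petstore"

    @patch("os.getcwd")  # decorator form, per the prompt
    def test_workspace_label(self, mock_getcwd):
        # Arrange
        mock_getcwd.return_value = "/srv/app"
        # Act
        label = workspace_label(self.prefix)
        # Assert
        self.assertEqual(label, "petstore:/srv/app")
        mock_getcwd.assert_called_once_with()  # spy-style verification

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWorkspaceLabel)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```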


For reference, the tokens consumed through #14 were as follows.

  • Token breakdown:
    • Message history: 4868
    • User query: 0
    • System Message: 19
    • Few-shot examples: 1
    • Max response: 1600
    • Estimated max tokens: 6488
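The reported estimate is simply the sum of these components, which is easy to verify (field names shortened here for readability):

```python
# Token budget components as reported above
breakdown = {
    "message_history": 4868,
    "user_query": 0,
    "system_message": 19,
    "few_shot_examples": 1,
    "max_response": 1600,
}

estimated_max_tokens = sum(breakdown.values())
print(estimated_max_tokens)  # → 6488, matching the reported value
```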

Preparation

I use the same OpenAPI Specification, PlantUML sequence diagram, and PlantUML class diagram as last time.

Prompt

The prompt is as follows. This time the full text is included in this article.

/*
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase, and additional test data can be specified in the test cases if the setUp test data is not sufficient. When using the patch module of unittest.mock for mocking, the @patch('...') decorator should be used instead of the with patch('...') as mock_foo: statement. Additionally, the spy test double pattern should be applied using the assert_called_once_with method of patch.object when applicable for the test case, and the with patch.object(...) as mock_foo: statement can be used in such cases. Each test case should follow the AAA (Arrange-Act-Assert) style. Be mindful of token limitations and avoid exceeding them when generating test code, and omit comments to save the number of characters.

* Test target: ペットストアAPI

* OpenAPI Specifications:
"""
{
"swagger": "2.0",
"info": {
"version": "1.0.0",
"title": "Swagger Petstore",
"description": "A sample API that uses a petstore as an example to demonstrate features in the swagger-2.0 specification",
"termsOfService": "http://swagger.io/terms/",
"contact": {
"name": "Swagger API Team"
},
"license": {
"name": "MIT"
}
},
"host": "petstore.swagger.io",
"basePath": "/api",
"schemes": [
"http"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/pets": {
"get": {
"description": "Returns all pets from the system that the user has access to",
"operationId": "findPets",
"produces": [
"application/json",
"application/xml",
"text/xml",
"text/html"
],
"parameters": [
{
"name": "tags",
"in": "query",
"description": "tags to filter by",
"required": false,
"type": "array",
"items": {
"type": "string"
},
"collectionFormat": "csv"
},
{
"name": "limit",
"in": "query",
"description": "maximum number of results to return",
"required": false,
"type": "integer",
"format": "int32"
}
],
"responses": {
"200": {
"description": "pet response",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/Pet"
}
}
},
"default": {
"description": "unexpected error",
"schema": {
"$ref": "#/definitions/ErrorModel"
}
}
}
},
"post": {
"description": "Creates a new pet in the store. Duplicates are allowed",
"operationId": "addPet",
"produces": [
"application/json"
],
"parameters": [
{
"name": "pet",
"in": "body",
"description": "Pet to add to the store",
"required": true,
"schema": {
"$ref": "#/definitions/NewPet"
}
}
],
"responses": {
"200": {
"description": "pet response",
"schema": {
"$ref": "#/definitions/Pet"
}
},
"default": {
"description": "unexpected error",
"schema": {
"$ref": "#/definitions/ErrorModel"
}
}
}
}
},
"/pets/{id}": {
"get": {
"description": "Returns a user based on a single ID, if the user does not have access to the pet",
"operationId": "findPetById",
"produces": [
"application/json",
"application/xml",
"text/xml",
"text/html"
],
"parameters": [
{
"name": "id",
"in": "path",
"description": "ID of pet to fetch",
"required": true,
"type": "integer",
"format": "int64"
}
],
"responses": {
"200": {
"description": "pet response",
"schema": {
"$ref": "#/definitions/Pet"
}
},
"default": {
"description": "unexpected error",
"schema": {
"$ref": "#/definitions/ErrorModel"
}
}
}
},
"delete": {
"description": "deletes a single pet based on the ID supplied",
"operationId": "deletePet",
"parameters": [
{
"name": "id",
"in": "path",
"description": "ID of pet to delete",
"required": true,
"type": "integer",
"format": "int64"
}
],
"responses": {
"204": {
"description": "pet deleted"
},
"default": {
"description": "unexpected error",
"schema": {
"$ref": "#/definitions/ErrorModel"
}
}
}
}
}
},
"definitions": {
"Pet": {
"type": "object",
"allOf": [
{
"$ref": "#/definitions/NewPet"
},
{
"required": [
"id"
],
"properties": {
"id": {
"type": "integer",
"format": "int64"
}
}
}
]
},
"NewPet": {
"type": "object",
"required": [
"name"
],
"properties": {
"name": {
"type": "string"
},
"tag": {
"type": "string"
}
}
},
"ErrorModel": {
"type": "object",
"required": [
"code",
"message"
],
"properties": {
"code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
}
}
}
}
}
"""

* PlantUML sequence diagrams:
"""
@startuml

title ペット
header %page% of %lastpage%
footer Copyright(c) All rights reserved.

autoactivate on
autonumber "<b>[00]"

actor ユーザー as user
entity ペットストアUI as ui
entity ペットストアAPI as api
database データベース as db

== ペットのリストを取得 ==

user -> ui : ペットリストボタンをクリック
ui -> api : findPets
api -> db : SELECT * FROM pets
return
return 200, pet response
return ペットのリストを表示

== ペットの詳細を取得 ==

user -> ui : ペットの詳細ボタンをクリック
ui -> api : findPetById
api -> db : SELECT * FROM pets WHERE id = ${pet_id}
return
return 200, pet response
return ペットの詳細を表示

== ペットを購入 ==

user -> ui : ペットの購入ボタンをクリック
ui -> api : addPet
group transaction
api -> db : transaction
api -> db : INSERT INTO orders VALUES (${user_id}, ${pet_id}, ${datetime}, ${created_at}, ${updated_at})
api -> db : DELETE FROM pets WHERE pet_id = ${pet_id}
api -> db : commit
return
return
return
return
end
return 200, pet response
return 購入完了画面を表示

== ペットの登録を削除 ==

user -> ui : ペットの登録削除ボタンをクリック
ui -> api : deletePet
group transaction
api -> db : DELETE FROM pets WHERE pet_id = (SELECT id FROM orders WHERE pet_id = ${pet_id} AND user_id = ${user_id})
api -> db : DELETE FROM orders WHERE pet_id = ${pet_id} AND user_id = ${user_id}
return
return
end
return 204, pet deleted
return 削除完了画面を表示

== ペットの購入に失敗 ==

user -> ui : ペットの購入ボタンをクリック
ui -> api : addPet
group transaction
api -> db : transaction
api -> db : INSERT INTO orders VALUES (${user_id}, ${pet_id}, ${datetime}, ${created_at}, ${updated_at})
api -> db !! : DELETE FROM pets WHERE pet_id = ${pet_id}
api -> db : rollback
return
return
return
end
return 500, unexpected error
return 購入失敗画面を表示

@enduml
"""

* PlantUML class diagrams:
"""
@startuml

class User {
- id
- name
}

class Pet {
- id
- name
- type
- tag
}

class Order {
- id
- user_id
- pet_id
}

Order "1" -- "*" User
Order "1" -- "*" Pet

@enduml
"""
*/

Model

As last time, I use text-davinci-003.

Input/output interface

As last time, I use the Playground in Azure OpenAI Studio.

Procedure

Same as last time.

Results

I ran the prompt above five times and collected each result. Below is a format check of each result against the instructions in the prompt. (Generated prompt #15 was used to enumerate the instruction items.)

# Instruction Result 1 Result 2 Result 3 Result 4 Result 5
1 MECE (mutually exclusive and collectively exhaustive) test code
2 in Python
3 utilizing the unittest module and patch module of unittest.mock
4 Allowing the user to specify test targets based on the actors of the sequence diagrams
5 Basing test cases on class diagrams first, -
6 then sequence diagrams,
7 and finally OpenAPI specifications for API servers -
8 Defining immutable test data in the setUp method of the test class that inherits unittest.TestCase
9 Specifying additional test data in test cases if needed - - - -
10 Using the @patch('...') decorator for mocking instead of with patch('...') as mock_foo: statement - - - -
11 Applying the spy test double pattern using the assert_called_once_with method of patch.object when applicable
12 Using the with patch.object(...) as mock_foo: statement in spy test double pattern cases - -
13 Following the AAA (Arrange-Act-Assert) style for each test case
14 Being mindful of token limitations and avoiding exceeding them when generating test code -
15 Omitting comments to save the number of characters - - - -

Each result satisfies many of the instructions.
Weighing the relative importance of the instruction items, Result 1 stands out from the rest.

Below are the individual results and their evaluations. Points already covered by the format check above are not repeated in the individual evaluations.

Result 1

import unittest
from unittest.mock import patch, patch.object

class TestPetStoreAPI(unittest.TestCase):
def setUp(self):
self.pet_data = {
'id': 1,
'name': 'Fido',
'type': 'dog',
'tag': 'cute'
}
self.order_data = {
'id': 1,
'user_id': 1,
'pet_id': 1
}

@patch('petstore.api.findPets')
def test_find_pets(self, mock_find_pets):
# Arrange
expected_response = {
'status': 200,
'data': [self.pet_data]
}

# Act
mock_find_pets.return_value = expected_response
response = petstore.api.findPets()

# Assert
self.assertEqual(response, expected_response)
mock_find_pets.assert_called_once_with()

@patch('petstore.api.findPetById')
def test_find_pet_by_id(self, mock_find_pet_by_id):
# Arrange
expected_response = {
'status': 200,
'data': self.pet_data
}

# Act
mock_find_pet_by_id.return_value = expected_response
response = petstore.api.findPetById(self.pet_data['id'])

# Assert
self.assertEqual(response, expected_response)
mock_find_pet_by_id.assert_called_once_with(self.pet_data['id'])

@patch('petstore.api.addPet')
def test_add_pet(self, mock_add_pet):
# Arrange
expected_response = {
'status': 200,
'data': self.pet_data
}

# Act
mock_add_pet.return_value = expected_response
response = petstore.api.addPet(self.pet_data)

# Assert
self.assertEqual(response, expected_response)
mock_add_pet.assert_called_once_with(self.pet_data)

@patch('petstore.api.deletePet')
@patch.object(petstore.api, 'findPetById', return_value={'status': 200, 'data': self.pet_data})
def test_delete_pet(self, mock_find_pet_by_id, mock_delete_pet):
# Arrange
expected_response = {
'status': 204,
'data': None
}

# Act
mock_delete_pet.return_value = expected_response
response = petstore.api.deletePet(self.pet_data['id'])

# Assert
self.assertEqual(response, expected_response)
mock_find_pet_by_id.assert_called_once_with(self.pet_data['id'])
mock_delete_pet.assert_called_once_with(self.pet_data['id'])

@patch('petstore.api.addPet')
@patch.object(petstore.api, 'findPetById', return_value={'status': 200, 'data': self.pet_data})
def test_add_pet_failure(self, mock_find_pet_by_id, mock_add_pet):
# Arrange
expected_response = {
'status': 500,
'data': None
}

# Act
mock_add_pet.return_value = expected_response
response = petstore.api.addPet(self.pet_data)

# Assert
self.assertEqual(response, expected_response)
mock_find_pet_by_id.assert_called_once_with(self.pet_data['id'])
mock_add_pet.assert_called_once_with(self.pet_data)

Good points

  • Nothing in particular worth noting

Points to improve

  • A test double is used for the test target itself.
  • The test data self.order_data is never used.
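The first point is the most significant: the generated tests patch the very function they are testing, so each assertion only checks the mock's own return value. A sketch of the intended shape, where only the dependency is replaced and the real logic runs (PetStoreAPI and Database here are hypothetical stand-ins, not the article's actual implementation):

```python
import unittest
from unittest.mock import patch

class Database:
    def select_pets(self):
        raise RuntimeError("real database not available in tests")

class PetStoreAPI:
    def __init__(self, db):
        self.db = db

    def find_pets(self):
        # Real logic under test: it must actually run, not be mocked away
        return {"status": 200, "data": self.db.select_pets()}

class TestFindPets(unittest.TestCase):
    def setUp(self):
        self.pet_data = {"id": 1, "name": "Fido", "type": "dog", "tag": "cute"}

    def test_find_pets(self):
        # Arrange: replace only the dependency
        db = Database()
        with patch.object(db, "select_pets", return_value=[self.pet_data]) as mock_select:
            api = PetStoreAPI(db)
            # Act
            response = api.find_pets()
        # Assert
        self.assertEqual(response, {"status": 200, "data": [self.pet_data]})
        mock_select.assert_called_once_with()

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestFindPets)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```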

Result 2

import unittest
from unittest.mock import patch, patch.object

class TestPetStoreAPI(unittest.TestCase):
def setUp(self):
self.pet_list = [
{
'id': 1,
'name': 'Fido',
'type': 'dog',
'tag': 'friendly'
},
{
'id': 2,
'name': 'Fluffy',
'type': 'cat',
'tag': 'playful'
}
]
self.user_id = 1
self.pet_id = 1
self.datetime = '2020-01-01 00:00:00'
self.created_at = '2020-01-01 00:00:00'
self.updated_at = '2020-01-01 00:00:00'

def test_find_pets(self):
# Arrange
expected_response = {
'status': 200,
'data': self.pet_list
}

# Act
with patch('petstore.findPets') as mock_find_pets:
mock_find_pets.return_value = expected_response
response = petstore.findPets()

# Assert
self.assertEqual(response, expected_response)
mock_find_pets.assert_called_once_with()

def test_find_pet_by_id(self):
# Arrange
expected_response = {
'status': 200,
'data': self.pet_list[0]
}

# Act
with patch('petstore.findPetById') as mock_find_pet_by_id:
mock_find_pet_by_id.return_value = expected_response
response = petstore.findPetById(self.pet_id)

# Assert
self.assertEqual(response, expected_response)
mock_find_pet_by_id.assert_called_once_with(self.pet_id)

def test_add_pet_success(self):
# Arrange
expected_response = {
'status': 200,
'data': self.pet_list[0]
}

# Act
with patch('petstore.addPet') as mock_add_pet:
with patch.object(db, 'transaction') as mock_transaction:
with patch.object(db, 'INSERT INTO orders VALUES') as mock_insert_orders:
with patch.object(db, 'DELETE FROM pets WHERE pet_id =') as mock_delete_pets:
with patch.object(db, 'commit') as mock_commit:
mock_add_pet.return_value = expected_response
response = petstore.addPet(self.user_id, self.pet_id, self.datetime, self.created_at, self.updated_at)

# Assert
self.assertEqual(response, expected_response)
mock_add_pet.assert_called_once_with(self.user_id, self.pet_id, self.datetime, self.created_at, self.updated_at)
mock_transaction.assert_called_once_with()
mock_insert_orders.assert_called_once_with(self.user_id, self.pet_id, self.datetime, self.created_at, self.updated_at)
mock_delete_pets.assert_called_once_with(self.pet_id)
mock_commit.assert_called_once_with()

def test_add_pet_failure(self):
# Arrange
expected_response = {
'status': 500,
'data': {
'code': 500,
'message': 'unexpected error'
}
}

# Act
with patch('petstore.addPet') as mock_add_pet:
with patch.object(db, 'transaction') as mock_transaction:
with patch.object(db, 'INSERT INTO orders VALUES') as mock_insert_orders:
with patch.object(db, 'DELETE FROM pets WHERE pet_id =') as mock_delete_pets:
with patch.object(db, 'rollback') as mock_rollback:
mock_add_pet.return_value = expected_response
response = petstore.addPet(self.user_id, self.pet_id, self.datetime, self.created_at, self.updated_at)

# Assert
self.assertEqual(response, expected_response)
mock_add_pet.assert_called_once_with(self.user_id, self.pet_id, self.datetime, self.created_at, self.updated_at)
mock_transaction.assert_called_once_with()
mock_insert_orders.assert_called_once_with(self.user_id, self.pet_id, self.datetime, self.created_at, self.updated_at)
mock_delete_pets.assert_called_once_with(self.pet_id)
mock_rollback.assert_called_once_with()

def test_delete_pet(self):
# Arrange
expected_response = {
'status': 204,
'data':

Completions call failed:
This model’s maximum context length is 4097 tokens, however you requested 4112 tokens (4012 in your prompt; 100 for the completion). Please reduce your prompt; or completion length.
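The error above reports a 4,097-token window, with 4,012 tokens already consumed by the prompt and 100 requested for the completion. One way to avoid this failure mode is to derive the completion budget from the prompt size rather than fixing it (a sketch; in practice prompt_tokens would come from a tokenizer):

```python
CONTEXT_LIMIT = 4097  # text-davinci-003 context window, per the error message above

def safe_max_tokens(prompt_tokens, limit=CONTEXT_LIMIT):
    # Leave the completion whatever room the prompt has not used
    return max(0, limit - prompt_tokens)

print(safe_max_tokens(4012))  # the failing request could have asked for at most 85 tokens
```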

Good points

  • It picked up datetime, created_at, and updated_at from the SQL in the sequence diagram and filled them in as test data.
  • The transaction is tested in detail.

Points to improve

  • A test double is used for the test target itself.
  • The with statements are deeply nested.
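The nesting grows one level per patch. Collecting the patches in a contextlib.ExitStack (or stacking them in a single with statement) keeps the test body flat however many mocks are needed (the Database class here is a hypothetical stand-in):

```python
import unittest
from contextlib import ExitStack
from unittest.mock import patch

class Database:
    def transaction(self):
        raise RuntimeError("not available in tests")

    def commit(self):
        raise RuntimeError("not available in tests")

class TestFlatPatching(unittest.TestCase):
    def test_transaction_commit(self):
        db = Database()
        # ExitStack enters any number of patches without nesting
        with ExitStack() as stack:
            mock_tx = stack.enter_context(patch.object(db, "transaction"))
            mock_commit = stack.enter_context(patch.object(db, "commit"))
            db.transaction()
            db.commit()
        mock_tx.assert_called_once_with()
        mock_commit.assert_called_once_with()

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestFlatPatching)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```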

Result 3

import unittest
from unittest.mock import patch, patch.object

class TestPetStoreAPI(unittest.TestCase):
def setUp(self):
self.test_pets = [
{
"id": 1,
"name": "Cat",
"type": "cat",
"tag": "feline"
},
{
"id": 2,
"name": "Dog",
"type": "dog",
"tag": "canine"
}
]
self.test_user = {
"id": 1,
"name": "John Doe"
}
self.test_order = {
"id": 1,
"user_id": 1,
"pet_id": 1
}

def test_find_pets(self):
# Arrange
expected_pets = self.test_pets
tags = ["feline", "canine"]
limit = 2

# Act
with patch('petstore.findPets') as mock_find_pets:
mock_find_pets.return_value = expected_pets
actual_pets = petstore.findPets(tags, limit)

# Assert
self.assertEqual(expected_pets, actual_pets)
mock_find_pets.assert_called_once_with(tags, limit)

def test_find_pet_by_id(self):
# Arrange
expected_pet = self.test_pets[0]
pet_id = 1

# Act
with patch('petstore.findPetById') as mock_find_pet_by_id:
mock_find_pet_by_id.return_value = expected_pet
actual_pet = petstore.findPetById(pet_id)

# Assert
self.assertEqual(expected_pet, actual_pet)
mock_find_pet_by_id.assert_called_once_with(pet_id)

def test_add_pet_success(self):
# Arrange
expected_pet = self.test_pets[0]
pet = {
"name": "Cat",
"type": "cat",
"tag": "feline"
}

# Act
with patch('petstore.addPet') as mock_add_pet:
mock_add_pet.return_value = expected_pet
with patch.object(petstore, 'transaction') as mock_transaction:
mock_transaction.return_value = None
actual_pet = petstore.addPet(pet)

# Assert
self.assertEqual(expected_pet, actual_pet)
mock_add_pet.assert_called_once_with(pet)
mock_transaction.assert_called_once()

def test_add_pet_failure(self):
# Arrange
expected_error = {
"code": 500,
"message": "unexpected error"
}
pet = {
"name": "Cat",
"type": "cat",
"tag": "feline"
}

# Act
with patch('petstore.addPet') as mock_add_pet:
mock_add_pet.return_value = expected_error
with patch.object(petstore, 'transaction') as mock_transaction:
mock_transaction.return_value = None
actual_error = petstore.addPet(pet)

# Assert
self.assertEqual(expected_error, actual_error)
mock_add_pet.assert_called_once_with(pet)
mock_transaction.assert_called_once()

def test_delete_pet(self):
# Arrange
expected_response = {
"code": 204,
"message": "pet deleted"
}
pet_id = 1
user_id = 1

# Act
with patch('petstore.deletePet') as mock_delete_pet:
mock_delete_pet.return_value = expected_response
with patch.object(petstore, 'transaction') as mock_transaction:
mock_transaction.return_value = None
actual_response = petstore.deletePet(pet_id, user_id)

# Assert
self.assertEqual(expected_response, actual_response)
mock_delete_pet.assert_called_once_with(pet_id, user_id)
mock_transaction.assert_called_once()

Good points

  • Anticipated and filled in the limit parameter.
  • The transaction is tested.

Points to improve

  • Test doubles are applied to the test target itself.
  • The test data self.test_user and self.test_order are unused.

Result 4

import unittest
from unittest.mock import patch, patch.object

class TestPetStoreAPI(unittest.TestCase):
def setUp(self):
self.test_pets = [
{
"id": 1,
"name": "Fido",
"type": "dog",
"tag": "friendly"
},
{
"id": 2,
"name": "Whiskers",
"type": "cat",
"tag": "curious"
}
]
self.test_user = {
"id": 1,
"name": "John Doe"
}
self.test_order = {
"id": 1,
"user_id": 1,
"pet_id": 1
}

def test_find_pets(self):
# Arrange
expected_response = {
"status": 200,
"data": self.test_pets
}

# Act
with patch('petstore.findPets') as mock_find_pets:
mock_find_pets.return_value = expected_response
response = petstore.findPets()

# Assert
self.assertEqual(response, expected_response)
mock_find_pets.assert_called_once_with()

def test_find_pet_by_id(self):
# Arrange
pet_id = 1
expected_response = {
"status": 200,
"data": self.test_pets[0]
}

# Act
with patch('petstore.findPetById') as mock_find_pet_by_id:
mock_find_pet_by_id.return_value = expected_response
response = petstore.findPetById(pet_id)

# Assert
self.assertEqual(response, expected_response)
mock_find_pet_by_id.assert_called_once_with(pet_id)

def test_add_pet_success(self):
# Arrange
pet = self.test_pets[0]
expected_response = {
"status": 200,
"data": pet
}

# Act
with patch('petstore.addPet') as mock_add_pet:
mock_add_pet.return_value = expected_response
response = petstore.addPet(pet)

# Assert
self.assertEqual(response, expected_response)
mock_add_pet.assert_called_once_with(pet)

def test_add_pet_failure(self):
# Arrange
pet = self.test_pets[0]
expected_response = {
"status": 500,
"data": None
}

# Act
with patch('petstore.addPet') as mock_add_pet:
mock_add_pet.return_value = expected_response
response = petstore.addPet(pet)

# Assert
self.assertEqual(response, expected_response)
mock_add_pet.assert_called_once_with(pet)

def test_delete_pet(self):
# Arrange
pet_id = 1
expected_response = {
"status": 204,
"data": None
}

# Act
with patch('petstore.deletePet') as mock_delete_pet:
mock_delete_pet.return_value = expected_response
response = petstore.deletePet(pet_id)

# Assert
self.assertEqual(response, expected_response)
mock_delete_pet.assert_called_once_with(pet_id)

Good points

  • Nothing in particular to mention.

Points to improve

  • Test doubles are applied to the test target itself.
  • The test data self.test_user and self.test_order are unused.

Result 5

import unittest
from unittest.mock import patch, patch.object

class TestPetStoreAPI(unittest.TestCase):
def setUp(self):
self.pet_data = {
'id': 1,
'name': 'Fido',
'type': 'dog',
'tag': 'friendly'
}
self.order_data = {
'id': 1,
'user_id': 1,
'pet_id': 1
}

def test_find_pets(self):
with patch('petstore.api.find_pets') as mock_find_pets:
mock_find_pets.return_value = [self.pet_data]
result = petstore.api.find_pets()
self.assertEqual(result, [self.pet_data])
mock_find_pets.assert_called_once_with()

def test_find_pet_by_id(self):
with patch('petstore.api.find_pet_by_id') as mock_find_pet_by_id:
mock_find_pet_by_id.return_value = self.pet_data
result = petstore.api.find_pet_by_id(1)
self.assertEqual(result, self.pet_data)
mock_find_pet_by_id.assert_called_once_with(1)

def test_add_pet(self):
with patch('petstore.api.add_pet') as mock_add_pet:
mock_add_pet.return_value = self.pet_data
result = petstore.api.add_pet(self.pet_data)
self.assertEqual(result, self.pet_data)
mock_add_pet.assert_called_once_with(self.pet_data)

def test_delete_pet(self):
with patch('petstore.api.delete_pet') as mock_delete_pet:
mock_delete_pet.return_value = None
result = petstore.api.delete_pet(1)
self.assertIsNone(result)
mock_delete_pet.assert_called_once_with(1)

def test_add_pet_transaction_success(self):
with patch.object(petstore.api, 'add_pet_transaction') as mock_add_pet_transaction:
mock_add_pet_transaction.return_value = self.pet_data
result = petstore.api.add_pet_transaction(self.pet_data, self.order_data)
self.assertEqual(result, self.pet_data)
mock_add_pet_transaction.assert_called_once_with(self.pet_data, self.order_data)

def test_add_pet_transaction_failure(self):
with patch.object(petstore.api, 'add_pet_transaction') as mock_add_pet_transaction:
mock_add_pet_transaction.side_effect = Exception('Transaction failed')
with self.assertRaises(Exception):
petstore.api.add_pet_transaction(self.pet_data, self.order_data)
mock_add_pet_transaction.assert_called_once_with(self.pet_data, self.order_data)

Good points

  • The transaction is tested.
  • Anticipates that an exception is raised when the transaction fails.

Points to improve

  • Test doubles are applied to the test target itself.

Overall assessment

Compared with last time, the results now follow the prompt faithfully and consistently.

The problem common to every result is that test doubles are applied to the test target itself.
This would not happen with a human programmer, but it only dawned on me afterwards that it does happen with an AI.
The AI apparently knows how to write test doubles without understanding their purpose (even though it could probably explain that purpose in words).
This constraint should have been fed in up front, when generating the prompt with PromptGenerator.

Assuming that problem is eventually solved: the results differ in minor ways, but apart from Result 2, which hit the token limit, almost any of them could probably be adopted as-is.
Among them, Result 1 is the most faithful to the prompt, and it is the one I would personally adopt as the test code.
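To make the test-double point concrete: a spy should replace a dependency of the test target, not the target itself, so that the target's real logic actually executes. Here is a minimal sketch under assumed names (add_pet and FakeDb are hypothetical illustrations, not part of the generated output):

```python
import unittest
from unittest.mock import patch

class FakeDb:
    """Hypothetical DB gateway; a dependency of the code under test."""
    def insert_order(self, order):
        raise RuntimeError("the real DB must not be hit in a unit test")

db = FakeDb()

def add_pet(pet, order):
    """Hypothetical implementation under test: this must NOT be mocked."""
    db.insert_order(order)  # the dependency we replace with a spy
    return {"status": 200, "data": pet}

class TestAddPet(unittest.TestCase):
    def test_add_pet_success(self):
        # Arrange
        pet = {"id": 1, "name": "Fido"}
        order = {"id": 1, "pet_id": 1}

        # Act: patch the dependency, then exercise the real target
        with patch.object(db, "insert_order") as mock_insert:
            response = add_pet(pet, order)

        # Assert: the target's own logic ran, and the spy observed the call
        self.assertEqual(response, {"status": 200, "data": pet})
        mock_insert.assert_called_once_with(order)
```

Contrast this with the generated results, where patching petstore.addPet itself means the test only verifies the mock's own return value.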

Summary

Using the PromptGenerator method, in which ChatGPT generates the prompt itself, I tuned the prompt, generated test code with the same approach as last time, and evaluated the results.

There are many ways to tune a prompt, but merely adopting this method was enough to raise the output quality dramatically.

Last time I concluded the output was usable only as "code snippets," but this time it very nearly amounts to complete test code. I was surprised that zero-shot prompting gets this far.
The remaining problem is that test doubles are applied even to the test target. This has to be solved at the PromptGenerator stage.

The input documents were only sample-level; refining them should make it possible to generate more practical test code.
At that point the token limit will again become an issue, so GPT-4 will be needed.
To further improve reproducibility, it will also eventually be worth moving Temperature closer to 0 for both prompt generation and test code generation.
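For reference, pinning Temperature with the openai Python SDK (the pre-1.0 Azure interface current at the time of writing) would look roughly like the following; the endpoint, API key, deployment name, and prompt content are all placeholders:

```python
import openai

# Placeholders only; substitute your own Azure OpenAI resource values.
openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "<your-api-key>"

response = openai.ChatCompletion.create(
    engine="<your-deployment>",  # Azure deployment name of the model
    messages=[{"role": "user", "content": "<generated prompt + documents>"}],
    temperature=0,               # minimize sampling variance for reproducibility
    max_tokens=2048,
)
```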

With that, this effort to generate test code from documents (an OpenAPI specification plus PlantUML sequence and class diagrams) with Azure OpenAI Service and evaluate the results comes to a close, at least as far as GPT-3 is concerned.


Appendix: Generated Prompts #2–#13

Generated Prompt #3

a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python using materials like OpenAI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams for enterprise software development, focusing on utilizing the unittest module and patch module of unittest.mock?

b) Suggestions:

  • Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases.
  • Describe the process of understanding software usage from the documents and making assumptions for test data preparation.
  • Provide examples of how test code can be generated from OpenAPI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams, incorporating the unittest module and patch module of unittest.mock.

c) Questions:

  1. Are there any specific examples or scenarios you would like me to include in the prompt?
  2. Would you like me to focus on a specific type of enterprise software or a general approach that can be applied across various types of software?

Answer

As my answer for your question 1, there are no examples or scenarios.
As my answer for your question 2, I would like you to focus on a general approach that can be applied across various types of software.

Generated Prompt #4

a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams, focusing on utilizing the unittest module and patch module of unittest.mock?

b) Suggestions:

  • Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software.
  • Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context.
  • Provide a general approach for generating test code from OpenAPI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams, incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software.

c) Questions:

  1. Is there any other aspect of test code generation you would like me to focus on?
  2. Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?

Answer

As my answer for your question 1, I would like you to generate test codes step-by-step. You will think of what test case you should generate next for each test case. You will prepare necessary test data step-by-step as well.
As my answer for your question 2, no, I still am not. In the prompt, I will give you OpenAPI Specifications in JSON or YAML, PlantUML sequence diagrams in text, and PlantUML class diagrams in text as well. I would like you to prepare fields for these input materials that you will receive.

Generated Prompt #5

a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step?

b) Suggestions:

  • Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach.
  • Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step.
  • Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software.

c) Questions:

  1. Are there any other specific requirements for the input materials (OpenAPI Specifications, PlantUML sequence diagrams, and PlantUML class diagrams) that you would like me to consider?
  2. Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?

Answer

As my answer for your question 1, I would like you to let me specify the test target, meaning what to test. It is usually chosen from actors of the sequence diagrams. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions. Please make further adjustments.

Generated Prompt #6

a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams?

b) Suggestions:

  • Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach and an option to specify test targets based on the actors of the sequence diagrams.
  • Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step and focusing on the specified test targets.
  • Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software and allows users to specify test targets based on the actors of the sequence diagrams.

c) Questions:

  1. Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
  2. Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?

Answer

As my answer for your question 1, I would like to emphasize that test cases should be based on the class diagrams first. And then, you consider the sequence diagrams. Do not forget considering the OpenAPI specifications when the test target is an API server. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so please make further adjustments.

Generated Prompt #7

a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server.

b) Suggestions:

  • Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, and a focus on class diagrams followed by sequence diagrams and OpenAPI specifications.
  • Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, and considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order.
  • Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, and follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications.

c) Questions:

  1. Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
  2. Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?

Answer

As my answer for your question 1, I would like you to make the test data immutable. To achieve that, you can define them in setUp method of the test class which inherits unittest.TestCase. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so I expect you to make further adjustments.

Generated Prompt #8

a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase.

b) Suggestions:

  • Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, and the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase.
  • Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, and ensuring the test data is immutable by defining it in the setUp method of the test class.
  • Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, and defines immutable test data in the setUp method of the test class that inherits unittest.TestCase.

c) Questions:

  1. Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
  2. Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?

Answer

As my answer for your question 1, if the test data which you defined in the setUp method of the test class do not make sense for additional test cases you considered based on the documents, you can specify additional test data in the test cases. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so I would like you to make further adjustments.

Generated Prompt #9

a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase, and additional test data can be specified in the test cases if the setUp test data is not sufficient.

b) Suggestions:

  • Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, and the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase, with the possibility to add extra test data in the test cases if necessary.
  • Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, ensuring the test data is immutable by defining it in the setUp method of the test class, and specifying additional test data in the test cases if required.
  • Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, defines immutable test data in the setUp method of the test class that inherits unittest.TestCase, and allows for additional test data to be specified in the test cases when necessary.

c) Questions:

  1. Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
  2. Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?

Answer

As my answer for your question 1, when you use patch module of unittest.mock for mocking, I would like you to use decorator like @patch('...') rather than with statement like with patch('...') as mock_foo:. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so please make further adjustments.

Generated Prompt #10

a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase, and additional test data can be specified in the test cases if the setUp test data is not sufficient. When using the patch module of unittest.mock for mocking, the @patch(‘…’) decorator should be used instead of the with patch(‘…’) as mock_foo: statement.

b) Suggestions:

  • Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase, the possibility to add extra test data in the test cases if necessary, and the use of @patch(‘…’) decorator for mocking.
  • Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, ensuring the test data is immutable by defining it in the setUp method of the test class, specifying additional test data in the test cases if required, and using the @patch(‘…’) decorator for mocking.
  • Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, defines immutable test data in the setUp method of the test class that inherits unittest.TestCase, allows for additional test data to be specified in the test cases when necessary, and employs the @patch(‘…’) decorator for mocking purposes.

c) Questions:

  1. Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
  2. Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?

Answer

As my answer for your question 1, I would like you to apply spy test double pattern using assert_called_once_with method of patch.object when the pattern is applicable for the test case. If you use the spy pattern, you can use patch.object under with statement like with patch.object(...) as mock_foo:. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so I would like you to make further adjustments.

Generated Prompt #11

a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase, and additional test data can be specified in the test cases if the setUp test data is not sufficient. When using the patch module of unittest.mock for mocking, the @patch(‘…’) decorator should be used instead of the with patch(‘…’) as mock_foo: statement. Additionally, the spy test double pattern should be applied using the assert_called_once_with method of patch.object when applicable for the test case, and the with patch.object(…) as mock_foo: statement can be used in such cases.

b) Suggestions:

  • Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase, the possibility to add extra test data in the test cases if necessary, the use of @patch(‘…’) decorator for mocking, and the application of the spy test double pattern using the assert_called_once_with method of patch.object when applicable.
  • Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, ensuring the test data is immutable by defining it in the setUp method of the test class, specifying additional test data in the test cases if required, using the @patch(‘…’) decorator for mocking, and applying the spy test double pattern with the assert_called_once_with method of patch.object when applicable.
  • Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, defines immutable test data in the setUp method of the test class that inherits unittest.TestCase, allows for additional test data to be specified in the test cases when necessary, employs the @patch(‘…’) decorator for mocking purposes, and uses the spy test double pattern with the assert_called_once_with method of patch.object when appropriate.

c) Questions:

  1. Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
  2. Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?

Answer

As my answer for your question 1, I would like you to write each test case in AAA (Arrange-Act-Assert) style. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so please make further adjustments.

Generated prompt #12

a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase, and additional test data can be specified in the test cases if the setUp test data is not sufficient. When using the patch module of unittest.mock for mocking, the @patch('…') decorator should be used instead of the with patch('…') as mock_foo: statement. Additionally, the spy test double pattern should be applied using the assert_called_once_with method of patch.object when applicable for the test case, and the with patch.object(…) as mock_foo: statement can be used in such cases. Each test case should follow the AAA (Arrange-Act-Assert) style.

b) Suggestions:

  • Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase, the possibility to add extra test data in the test cases if necessary, the use of @patch('…') decorator for mocking, the application of the spy test double pattern using the assert_called_once_with method of patch.object when applicable, and following the AAA (Arrange-Act-Assert) style for each test case.
  • Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, ensuring the test data is immutable by defining it in the setUp method of the test class, specifying additional test data in the test cases if required, using the @patch('…') decorator for mocking, applying the spy test double pattern with the assert_called_once_with method of patch.object when applicable, and adhering to the AAA (Arrange-Act-Assert) style for each test case.
  • Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, defines immutable test data in the setUp method of the test class that inherits unittest.TestCase, allows for additional test data to be specified in the test cases when necessary, employs the @patch('…') decorator for mocking purposes, uses the spy test double pattern with the assert_called_once_with method of patch.object when appropriate, and follows the AAA (Arrange-Act-Assert) style for each test case.

c) Questions:

  1. Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
  2. Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?

Answer

As my answer for your question 1, if you are aware of current token count, be careful not to exceed the limitation of tokens especially if you are a model of GPT-3. The token length limit is 4096 for GPT-3, 8192 for GPT-4, and 32768 for GPT-4-32k. Not to exceed the limitation, you do not need to output comments. If you are about to exceed the limitation, you can stop outputting next test case and finish the test class. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so I would like you to make further adjustments.

Generated prompt #13

a) Revised Prompt:
How can one generate MECE (mutually exclusive and collectively exhaustive) test code in Python for various types of enterprise software using materials like OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), focusing on utilizing the unittest module and patch module of unittest.mock, while generating test cases and preparing test data step-by-step, and allowing the user to specify test targets based on the actors of the sequence diagrams? The test cases should be based on class diagrams first, then considering sequence diagrams, and finally taking into account the OpenAPI specifications when the test target is an API server. The test data should be immutable, defined in the setUp method of the test class that inherits unittest.TestCase, and additional test data can be specified in the test cases if the setUp test data is not sufficient. When using the patch module of unittest.mock for mocking, the @patch('…') decorator should be used instead of the with patch('…') as mock_foo: statement. Additionally, the spy test double pattern should be applied using the assert_called_once_with method of patch.object when applicable for the test case, and the with patch.object(…) as mock_foo: statement can be used in such cases. Each test case should follow the AAA (Arrange-Act-Assert) style. Be mindful of token limitations and avoid exceeding them when generating test code.

b) Suggestions:

  • Explain how to create organized, structured, and clean test code considering normal, error, corner, and edge cases for different types of enterprise software, with a step-by-step approach, an option to specify test targets based on the actors of the sequence diagrams, a focus on class diagrams followed by sequence diagrams and OpenAPI specifications, the use of setUp method to define immutable test data in the test class that inherits unittest.TestCase, the possibility to add extra test data in the test cases if necessary, the use of @patch('…') decorator for mocking, the application of the spy test double pattern using the assert_called_once_with method of patch.object when applicable, following the AAA (Arrange-Act-Assert) style for each test case, and being mindful of token limitations when generating test code.
  • Describe the process of understanding software usage from the documents and making assumptions for test data preparation in a general context, while preparing necessary test data step-by-step, focusing on the specified test targets, considering class diagrams, sequence diagrams, and OpenAPI specifications in the respective order, ensuring the test data is immutable by defining it in the setUp method of the test class, specifying additional test data in the test cases if required, using the @patch('…') decorator for mocking, applying the spy test double pattern with the assert_called_once_with method of patch.object when applicable, adhering to the AAA (Arrange-Act-Assert) style for each test case, and taking token limitations into account when generating test code.
  • Provide a general approach for generating test code from OpenAPI Specifications (in JSON or YAML), PlantUML sequence diagrams (in text), and PlantUML class diagrams (in text), incorporating the unittest module and patch module of unittest.mock, that can be applied across various types of software, allows users to specify test targets based on the actors of the sequence diagrams, follows the order of considering class diagrams, sequence diagrams, and OpenAPI specifications, defines immutable test data in the setUp method of the test class that inherits unittest.TestCase, allows for additional test data to be specified in the test cases when necessary, employs the @patch('…') decorator for mocking purposes, uses the spy test double pattern with the assert_called_once_with method of patch.object when appropriate, follows the AAA (Arrange-Act-Assert) style for each test case, and remains within token limitations when generating test code.

c) Questions:

  1. Are there any other specific requirements or adjustments you would like me to consider for the revised prompt and suggestions?
  2. Are you satisfied with the revised prompt and suggestions, or would you like me to make further adjustments?

Answer

As my answer for your question 1, you do not have to write comments to save the number of characters and not to exceed the token limitation. As my answer for your question 2, I am still not satisfied with the revised prompt and suggestions, so please make further adjustments.
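As a side note (not part of the ChatGPT exchange above), the conventions the revised prompt keeps asking for — immutable test data in setUp, mocking with the @patch decorator, the spy pattern via patch.object with assert_called_once_with, and AAA-style test cases — can be sketched concretely. The OrderService and InMemoryRepository classes below are hypothetical stand-ins for code that would be generated from the class diagrams and OpenAPI specification, not classes from this article's materials:

```python
import json
import unittest
from unittest.mock import patch


# Hypothetical classes for illustration only.
class InMemoryRepository:
    def save(self, order):
        return {"id": 1, **order}


class OrderService:
    def __init__(self, repository):
        self.repository = repository

    def place_order(self, order):
        if order["quantity"] <= 0:
            raise ValueError("quantity must be positive")
        return self.repository.save(order)

    def export(self, order):
        return json.dumps(order)


class TestOrderService(unittest.TestCase):
    def setUp(self):
        # Immutable test data shared by all test cases, per the prompt.
        self.valid_order = {"item": "book", "quantity": 2}

    def test_place_order_returns_saved_order(self):
        # Arrange
        service = OrderService(InMemoryRepository())
        # Act
        result = service.place_order(self.valid_order)
        # Assert
        self.assertEqual(result["id"], 1)

    def test_place_order_rejects_non_positive_quantity(self):
        # Arrange: extra test data specific to this case, on top of setUp.
        service = OrderService(InMemoryRepository())
        invalid_order = {"item": "book", "quantity": 0}
        # Act / Assert
        with self.assertRaises(ValueError):
            service.place_order(invalid_order)

    @patch("json.dumps", return_value="{}")
    def test_export_serializes_order(self, mock_dumps):
        # Arrange: mocking via the @patch('…') decorator form.
        service = OrderService(InMemoryRepository())
        # Act
        result = service.export(self.valid_order)
        # Assert
        self.assertEqual(result, "{}")
        mock_dumps.assert_called_once_with(self.valid_order)

    def test_place_order_delegates_to_repository(self):
        # Arrange: spy pattern with "with patch.object(…) as mock_foo:".
        repository = InMemoryRepository()
        service = OrderService(repository)
        with patch.object(repository, "save", return_value={"id": 1}) as mock_save:
            # Act
            service.place_order(self.valid_order)
            # Assert
            mock_save.assert_called_once_with(self.valid_order)
```

Run it with `python -m unittest`. The generated code in the experiments below follows this same shape.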