`docs/Accessing-AI-Services-in-JavaScript.md` (+45 lines)

### Customizing the default model configuration

When retrieving a model using the `getModel()` method, it is possible to provide a `generationConfig` argument to customize the model configuration. The `generationConfig` key needs to contain an object with configuration arguments. These arguments are normalized in a way that works across the different AI services and their APIs.

In addition to `generationConfig`, you can pass a `systemInstruction` argument if you want to provide a custom instruction for how the model should behave. By setting a system instruction, you give the model additional context to understand its tasks, provide more customized responses, and adhere to specific guidelines throughout the entire user interaction with the model.

Here is a code example using both `generationConfig` and `systemInstruction`:

```js
try {
	// `service` is an available AI service instance, retrieved as shown in the earlier examples.
	const model = service.getModel(
		{
			generationConfig: {
				// Illustrative configuration values; see the list of common arguments below.
				maxOutputTokens: 128,
				temperature: 0.2,
			},
			systemInstruction: 'You are a WordPress expert. You should respond exclusively to prompts and questions about WordPress.',
		}
	);

	// Generate text using the model.
} catch ( error ) {
	// Handle the error.
}
```

Note that not all configuration arguments are supported by every service API. However, a good number of arguments _are_ supported consistently, so here is a list of common configuration arguments that are widely supported (a combined sketch follows the list):

* `stopSequences` _(string)_: Set of character sequences that will stop output generation.
    * Supported by all except `browser`.
* `maxOutputTokens` _(integer)_: The maximum number of tokens to include in a response candidate.
    * Supported by all except `browser`.
* `temperature` _(float)_: Floating point value to control the randomness of the output, between 0.0 and 1.0.
    * Supported by all.
* `topP` _(float)_: The maximum cumulative probability of tokens to consider when sampling.
    * Supported by all except `browser`.
* `topK` _(integer)_: The maximum number of tokens to consider when sampling.
    * Supported by all except `openai`.

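To tie these together, here is a minimal sketch of a `generationConfig` object that combines several of the common arguments above. The concrete values are arbitrary placeholders, not recommendations:

```js
// A generationConfig object combining several of the common arguments listed above.
// The values are arbitrary placeholders, not recommendations.
const generationConfig = {
	maxOutputTokens: 256,
	temperature: 0.7,
	topP: 0.9,
	topK: 40,
};

// Pass it alongside the other model arguments, as in the example further up:
// const model = service.getModel( { generationConfig, systemInstruction: '…' } );
```
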
Please see the [`Felix_Arntz\AI_Services\Services\API\Types\Generation_Config` class](../includes/Services/API/Types/Generation_Config.php) for all available configuration arguments, and consult the API documentation of the respective provider to see which of them are supported.

`docs/Accessing-AI-Services-in-PHP.md` (+49 lines)

It's worth noting that streaming is likely more useful in JavaScript than in PHP, since in PHP there are typically no opportunities to print the iterative responses to the user as they come in. That said, streaming can certainly have value in PHP as well: it is, for example, used in the plugin's WP-CLI command.

### Customizing the default model configuration

When retrieving a model using the `get_model()` method, it is possible to provide a `generationConfig` argument to customize the model configuration. The `generationConfig` key needs to contain an instance of the [`Felix_Arntz\AI_Services\Services\API\Types\Generation_Config` class](../includes/Services/API/Types/Generation_Config.php), which allows you to provide various model configuration arguments in a normalized way that works across the different AI services and their APIs.

In addition to `generationConfig`, you can pass a `systemInstruction` argument if you want to provide a custom instruction for how the model should behave. By setting a system instruction, you give the model additional context to understand its tasks, provide more customized responses, and adhere to specific guidelines throughout the entire user interaction with the model.

Here is a code example using both `generationConfig` and `systemInstruction`:

```php
use Felix_Arntz\AI_Services\Services\API\Enums\AI_Capability;
use Felix_Arntz\AI_Services\Services\API\Types\Generation_Config;

try {
	// $service is an available AI service instance, retrieved as shown in the earlier examples.
	$model = $service->get_model(
		array(
			'capabilities'      => array( AI_Capability::TEXT_GENERATION ),
			'generationConfig'  => Generation_Config::from_array(
				array(
					// Illustrative configuration values; see the list of common arguments below.
					'maxOutputTokens' => 128,
					'temperature'     => 0.2,
				)
			),
			'systemInstruction' => 'You are a WordPress expert. You should respond exclusively to prompts and questions about WordPress.',
		)
	);

	// Generate text using the model.
} catch ( Exception $e ) {
	// Handle the exception.
}
```

Note that not all configuration arguments are supported by every service API. However, a good number of arguments _are_ supported consistently, so here is a list of common configuration arguments that are widely supported (a combined sketch follows the list):

* `stopSequences` _(string)_: Set of character sequences that will stop output generation.
    * Supported by all.
* `maxOutputTokens` _(integer)_: The maximum number of tokens to include in a response candidate.
    * Supported by all.
* `temperature` _(float)_: Floating point value to control the randomness of the output, between 0.0 and 1.0.
    * Supported by all.
* `topP` _(float)_: The maximum cumulative probability of tokens to consider when sampling.
    * Supported by all.
* `topK` _(integer)_: The maximum number of tokens to consider when sampling.
    * Supported by all except `openai`.

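To tie these together, here is a minimal sketch of a `Generation_Config` instance that combines several of the common arguments above, using the same `Generation_Config::from_array()` construction as in the example further up. The concrete values are arbitrary placeholders, not recommendations:

```php
use Felix_Arntz\AI_Services\Services\API\Types\Generation_Config;

// A Generation_Config instance combining several of the common arguments listed above.
// The values are arbitrary placeholders, not recommendations.
$generation_config = Generation_Config::from_array(
	array(
		'maxOutputTokens' => 256,
		'temperature'     => 0.7,
		'topP'            => 0.9,
		'topK'            => 40,
	)
);

// Pass it alongside the other model arguments, as in the example further up:
// $model = $service->get_model( array( 'generationConfig' => $generation_config /* , ... */ ) );
```
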
Please see the [`Felix_Arntz\AI_Services\Services\API\Types\Generation_Config` class](../includes/Services/API/Types/Generation_Config.php) for all available configuration arguments, and consult the API documentation of the respective provider to see which of them are supported.