
Commit 930058a

Expand documentation to explain how to customize model configuration.
1 parent e7ae611 commit 930058a

3 files changed, +100 -1 lines changed


docs/Accessing-AI-Services-in-JavaScript.md

+45
@@ -268,6 +268,51 @@ try {
}
```

### Customizing the default model configuration

When retrieving a model using the `getModel()` method, it is possible to provide a `generationConfig` argument to customize the model configuration. The `generationConfig` key needs to contain an object with configuration arguments. These arguments are normalized in a way that works across the different AI services and their APIs.

In addition to `generationConfig`, you can pass a `systemInstruction` argument if you want to provide a custom instruction for how the model should behave. By setting a system instruction, you give the model additional context to understand its tasks, provide more customized responses, and adhere to specific guidelines throughout the entire user interaction with the model.

Here is a code example using both `generationConfig` and `systemInstruction`:
```js
const enums = aiServices.ai.enums;

try {
	const model = service.getModel(
		{
			feature: 'my-test-feature',
			capabilities: [ enums.AiCapability.TEXT_GENERATION ],
			generationConfig: {
				maxOutputTokens: 128,
				temperature: 0.2,
			},
			systemInstruction: 'You are a WordPress expert. You should respond exclusively to prompts and questions about WordPress.',
		}
	);

	// Generate text using the model.
} catch ( error ) {
	// Handle the error.
}
```

Note that not all configuration arguments are supported by every service API. However, a good number of arguments _are_ supported consistently, so here is a list of common configuration arguments that are widely supported:

* `stopSequences` _(string)_: Set of character sequences that will stop output generation.
  * Supported by all except `browser`.
* `maxOutputTokens` _(integer)_: The maximum number of tokens to include in a response candidate.
  * Supported by all except `browser`.
* `temperature` _(float)_: Floating point value to control the randomness of the output, between 0.0 and 1.0.
  * Supported by all.
* `topP` _(float)_: The maximum cumulative probability of tokens to consider when sampling.
  * Supported by all except `browser`.
* `topK` _(integer)_: The maximum number of tokens to consider when sampling.
  * Supported by all except `openai`.

Please see the [`Felix_Arntz\AI_Services\Services\API\Types\Generation_Config` class](../includes/Services/API/Types/Generation_Config.php) for all available configuration arguments, and consult the API documentation of the respective provider to see which of them are supported.
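For illustration, the other common arguments listed above are passed through the same `generationConfig` object. Here is a minimal sketch with illustrative values, assuming `stopSequences` accepts an array of strings and that the service in use supports all of these arguments:

```js
const enums = aiServices.ai.enums;

try {
	const model = service.getModel(
		{
			feature: 'my-test-feature',
			capabilities: [ enums.AiCapability.TEXT_GENERATION ],
			generationConfig: {
				// Stop output once one of these sequences is emitted
				// (assumed to be provided as an array of strings).
				stopSequences: [ '\n\n' ],
				maxOutputTokens: 128,
				temperature: 0.2,
				// Sampling controls; not supported by every service
				// (see the list above). Values are illustrative only.
				topP: 0.9,
				topK: 40,
			},
		}
	);

	// Generate text using the model.
} catch ( error ) {
	// Handle the error.
}
```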
## Generating image content using an AI service

Coming soon.

docs/Accessing-AI-Services-in-PHP.md

+49
@@ -243,6 +243,55 @@ try {

It's worth noting that streaming is likely more useful in JavaScript than in PHP, since in PHP there are typically no opportunities to print the iterative responses to the user as they come in. That said, streaming can certainly have value in PHP as well: it is, for example, used in the plugin's WP-CLI command.

### Customizing the default model configuration

When retrieving a model using the `get_model()` method, it is possible to provide a `generationConfig` argument to customize the model configuration. The `generationConfig` key needs to contain an instance of the [`Felix_Arntz\AI_Services\Services\API\Types\Generation_Config` class](../includes/Services/API/Types/Generation_Config.php), which allows you to provide various model configuration arguments in a normalized way that works across the different AI services and their APIs.

In addition to `generationConfig`, you can pass a `systemInstruction` argument if you want to provide a custom instruction for how the model should behave. By setting a system instruction, you give the model additional context to understand its tasks, provide more customized responses, and adhere to specific guidelines throughout the entire user interaction with the model.

Here is a code example using both `generationConfig` and `systemInstruction`:
```php
use Felix_Arntz\AI_Services\Services\API\Enums\AI_Capability;
use Felix_Arntz\AI_Services\Services\API\Types\Generation_Config;

try {
	$model = $service
		->get_model(
			array(
				'feature' => 'my-test-feature',
				'capabilities' => array( AI_Capability::TEXT_GENERATION ),
				'generationConfig' => Generation_Config::from_array(
					array(
						'maxOutputTokens' => 128,
						'temperature' => 0.2,
					)
				),
				'systemInstruction' => 'You are a WordPress expert. You should respond exclusively to prompts and questions about WordPress.',
			)
		);

	// Generate text using the model.
} catch ( Exception $e ) {
	// Handle the exception.
}
```

Note that not all configuration arguments are supported by every service API. However, a good number of arguments _are_ supported consistently, so here is a list of common configuration arguments that are widely supported:

* `stopSequences` _(string)_: Set of character sequences that will stop output generation.
  * Supported by all.
* `maxOutputTokens` _(integer)_: The maximum number of tokens to include in a response candidate.
  * Supported by all.
* `temperature` _(float)_: Floating point value to control the randomness of the output, between 0.0 and 1.0.
  * Supported by all.
* `topP` _(float)_: The maximum cumulative probability of tokens to consider when sampling.
  * Supported by all.
* `topK` _(integer)_: The maximum number of tokens to consider when sampling.
  * Supported by all except `openai`.

Please see the [`Felix_Arntz\AI_Services\Services\API\Types\Generation_Config` class](../includes/Services/API/Types/Generation_Config.php) for all available configuration arguments, and consult the API documentation of the respective provider to see which of them are supported.
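For illustration, the other common arguments listed above are passed through the same `Generation_Config::from_array()` call. Here is a minimal sketch with illustrative values, assuming `stopSequences` accepts an array of strings and that the service in use supports all of these arguments:

```php
use Felix_Arntz\AI_Services\Services\API\Enums\AI_Capability;
use Felix_Arntz\AI_Services\Services\API\Types\Generation_Config;

try {
	$model = $service
		->get_model(
			array(
				'feature' => 'my-test-feature',
				'capabilities' => array( AI_Capability::TEXT_GENERATION ),
				'generationConfig' => Generation_Config::from_array(
					array(
						// Stop output once one of these sequences is emitted
						// (assumed to be provided as an array of strings).
						'stopSequences' => array( "\n\n" ),
						'maxOutputTokens' => 128,
						'temperature' => 0.2,
						// Sampling controls; `topK` is not supported by `openai`
						// (see the list above). Values are illustrative only.
						'topP' => 0.9,
						'topK' => 40,
					)
				),
			)
		);

	// Generate text using the model.
} catch ( Exception $e ) {
	// Handle the exception.
}
```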
## Generating image content using an AI service

Coming soon.

includes/Services/API/Types/Generation_Config.php

+6-1
@@ -279,7 +279,12 @@ public static function get_json_schema(): array {
 			'minimum' => 1,
 		),
 		'temperature' => array(
-			'description' => __( 'Floating point value to control the randomness of the output.', 'ai-services' ),
+			'description' => sprintf(
+				/* translators: 1: Minimum value, 2: Maximum value */
+				__( 'Floating point value to control the randomness of the output, between %1$s and %2$s.', 'ai-services' ),
+				'0.0',
+				'1.0'
+			),
 			'type' => 'number',
 			'minimum' => 0.0,
 			'maximum' => 1.0,
