2026 Ruff Formatter Style #22735

Merged
ntBre merged 8 commits into brent/0.15.0 from 2026-style
Feb 3, 2026

Conversation


@dylwil3 dylwil3 commented Jan 19, 2026

@dylwil3 dylwil3 added this to the v0.15 milestone Jan 19, 2026
@dylwil3 dylwil3 added the do-not-merge Do not merge this pull request label Jan 19, 2026
@MichaReiser
Member

@dylwil3 where do you want to have the design decision conversation? All here? On the individual PRs? You don't want to discuss at all?

@dylwil3 dylwil3 requested a review from MichaReiser as a code owner January 23, 2026 14:43
@dylwil3 dylwil3 added breaking Breaking API change formatter Related to the formatter labels Jan 23, 2026

astral-sh-bot bot commented Jan 23, 2026

ruff-ecosystem results

Formatter (stable)

ℹ️ ecosystem check detected format changes. (+594 -133 lines in 329 files in 22 projects; 33 projects unchanged)
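Two patterns recur throughout the diffs below (a sketch inferred from the ecosystem output, not an official changelog): a blank line the author wrote immediately after a function signature is now preserved instead of removed, and a conditional expression used as a lambda body is wrapped in parentheses rather than split bare across lines. A minimal hypothetical illustration (`trigger` and `pos2` are made-up names):

```python
# Blank line after the signature: under the new style, the formatter
# keeps the author's blank line rather than deleting it.
def trigger(request):

    return request


# Conditional expression as a lambda body: the new style parenthesizes
# it as one group instead of breaking it unparenthesized over lines.
pos2 = lambda tag: (tag[:2] if tag is not None else None)

print(trigger("ok"))
print(pos2("NOUN"))
print(pos2(None))
```

Both snippets are behaviorally identical before and after formatting; only the layout differs.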

PostHog/HouseWatch (+2 -0 lines across 2 files)

housewatch/api/async_migration.py~L52

 
     @action(methods=["POST"], detail=True)
     def trigger(self, request, **kwargs):
+
         migration = self.get_object()
 
         migration.status = MigrationStatus.Starting

housewatch/async_migrations/runner.py~L27

 def start_async_migration(
     migration: AsyncMigration, ignore_posthog_version=False
 ) -> bool:
+
     if migration.status not in [MigrationStatus.Starting, MigrationStatus.NotStarted]:
         logger.error(f"Initial check failed for async migration {migration.name}")
         return False

RasaHQ/rasa (+158 -6 lines across 89 files)

data/test_classes/custom_slots.py~L28

         value_reset_delay: Optional[int] = None,
         influence_conversation: bool = True,
     ) -> None:
+
         super().__init__(
             name=name,
             initial_value=initial_value,

examples/reminderbot/actions/actions.py~L28

         tracker: Tracker,
         domain: Dict[Text, Any],
     ) -> List[Dict[Text, Any]]:
+
         dispatcher.utter_message("I will remind you in 5 seconds.")
 
         date = datetime.datetime.now() + datetime.timedelta(seconds=5)

examples/reminderbot/actions/actions.py~L56

         tracker: Tracker,
         domain: Dict[Text, Any],
     ) -> List[Dict[Text, Any]]:
+
         name = next(tracker.get_slot("PERSON"), "someone")
         dispatcher.utter_message(f"Remember to call {name}!")
 

examples/reminderbot/actions/actions.py~L71

     async def run(
         self, dispatcher, tracker: Tracker, domain: Dict[Text, Any]
     ) -> List[Dict[Text, Any]]:
+
         conversation_id = tracker.sender_id
 
         dispatcher.utter_message(f"The ID of this conversation is '{conversation_id}'.")

examples/reminderbot/actions/actions.py~L98

         tracker: Tracker,
         domain: Dict[Text, Any],
     ) -> List[Dict[Text, Any]]:
+
         plant = next(tracker.get_latest_entity_values("plant"), "someone")
         dispatcher.utter_message(f"Your {plant} needs some water!")
 

examples/reminderbot/actions/actions.py~L113

     async def run(
         self, dispatcher, tracker: Tracker, domain: Dict[Text, Any]
     ) -> List[Dict[Text, Any]]:
+
         dispatcher.utter_message("Okay, I'll cancel all your reminders.")
 
         # Cancel all reminders

examples/reminderbot/callback_server.py~L4

 
 
 def create_app() -> Sanic:
+
     bot_app = Sanic("callback_server", configure_logging=False)
 
     @bot_app.post("/bot")

rasa/cli/export.py~L177

 
 
 async def _export_trackers(args: argparse.Namespace) -> None:
+
     _assert_max_timestamp_is_greater_than_min_timestamp(args)
 
     endpoints = rasa.core.utils.read_endpoints_from_path(args.endpoints)

rasa/cli/run.py~L58

 
 
 def _validate_model_path(model_path: Text, parameter: Text, default: Text) -> Text:
+
     if model_path is not None and not os.path.exists(model_path):
         reason_str = f"'{model_path}' not found."
         if model_path is None:

rasa/core/actions/action.py~L658

 
 class RemoteAction(Action):
     def __init__(self, name: Text, action_endpoint: Optional[EndpointConfig]) -> None:
+
         self._name = name
         self.action_endpoint = action_endpoint
 

rasa/core/agent.py~L503

         return await self.handle_message(msg)
 
     def _set_fingerprint(self, fingerprint: Optional[Text] = None) -> None:
+
         if fingerprint:
             self.fingerprint = fingerprint
         else:

rasa/core/channels/botframework.py~L48

         bot: Text,
         service_url: Text,
     ) -> None:
+
         service_url = (
             f"{service_url}/" if not service_url.endswith("/") else service_url
         )

rasa/core/channels/callback.py~L22

         return "callback"
 
     def __init__(self, endpoint: EndpointConfig) -> None:
+
         self.callback_endpoint = endpoint
         super().__init__()
 

rasa/core/channels/facebook.py~L33

         page_access_token: Text,
         on_new_message: Callable[[UserMessage], Awaitable[Any]],
     ) -> None:
+
         self.on_new_message = on_new_message
         self.client = MessengerClient(page_access_token)
         self.last_message: Dict[Text, Any] = {}

rasa/core/channels/facebook.py~L171

         return "facebook"
 
     def __init__(self, messenger_client: MessengerClient) -> None:
+
         self.messenger_client = messenger_client
         super().__init__()
 

rasa/core/channels/facebook.py~L349

     def blueprint(
         self, on_new_message: Callable[[UserMessage], Awaitable[Any]]
     ) -> Blueprint:
+
         fb_webhook = Blueprint("fb_webhook", __name__)
 
         # noinspection PyUnusedLocal

rasa/core/channels/hangouts.py~L40

 
     @staticmethod
     def _text_card(message: Dict[Text, Any]) -> Dict:
+
         card = {
             "cards": [
                 {

rasa/core/channels/hangouts.py~L192

 
     @classmethod
     def from_credentials(cls, credentials: Optional[Dict[Text, Any]]) -> InputChannel:
+
         if credentials:
             return cls(credentials.get("project_id"))
 

rasa/core/channels/hangouts.py~L204

         hangouts_room_added_intent_name: Optional[Text] = "/room_added",
         hangouts_removed_intent_name: Optional[Text] = "/bot_removed",
     ) -> None:
+
         self.project_id = project_id
         self.hangouts_user_added_intent_name = hangouts_user_added_intent_name
         self.hangouts_room_added_intent_name = hangouts_room_added_intent_name

rasa/core/channels/hangouts.py~L226

 
     @staticmethod
     def _extract_sender(req: Request) -> Text:
+
         if req.json["type"] == "MESSAGE":
             return req.json["message"]["sender"]["displayName"]
 

rasa/core/channels/hangouts.py~L233

 
     # noinspection PyMethodMayBeStatic
     def _extract_message(self, req: Request) -> Text:
+
         if req.json["type"] == "MESSAGE":
             message = req.json["message"]["text"]
 

rasa/core/channels/hangouts.py~L292

 
         @custom_webhook.route("/webhook", methods=["POST"])
         async def receive(request: Request) -> HTTPResponse:
+
             if self.project_id:
                 token = request.headers.get("Authorization", "").replace("Bearer ", "")
                 self._check_token(token)

rasa/core/channels/rocketchat.py~L114

         )
 
     def __init__(self, user: Text, password: Text, server_url: Text) -> None:
+
         self.user = user
         self.password = password
         self.server_url = server_url

rasa/core/channels/webexteams.py~L78

         sender_id: Optional[Text],
         metadata: Optional[Dict],
     ) -> Any:
+
         try:
             out_channel = self.get_output_channel()
             user_msg = UserMessage(

rasa/core/featurizers/single_state_featurizer.py~L186

         precomputations: Optional[MessageContainerForCoreFeaturization],
         sparse: bool = False,
     ) -> Dict[Text, List[Features]]:
+
         # Remove entities from possible attributes
         attributes = set(
             attribute for attribute in sub_state.keys() if attribute != ENTITIES

rasa/core/nlg/callback.py~L62

     """
 
     def __init__(self, endpoint_config: EndpointConfig) -> None:
+
         self.nlg_endpoint = endpoint_config
 
     async def generate(

rasa/core/policies/rule_policy.py~L951

     def _find_action_from_loop_happy_path(
         tracker: DialogueStateTracker,
     ) -> Tuple[Optional[Text], Optional[Text]]:
+
         active_loop_name = tracker.active_loop_name
         if active_loop_name is None:
             return None, None

rasa/core/policies/ted_policy.py~L1926

         text_output: tf.Tensor,
         text_sequence_lengths: tf.Tensor,
     ) -> tf.Tensor:
+
         text_transformed, text_mask, text_sequence_lengths = self._reshape_for_entities(
             tf_batch_data,
             dialogue_transformer_output,

rasa/core/policies/ted_policy.py~L2128

         text_output: tf.Tensor,
         text_sequence_lengths: tf.Tensor,
     ) -> Tuple[tf.Tensor, tf.Tensor]:
+
         text_transformed, _, text_sequence_lengths = self._reshape_for_entities(
             tf_batch_data,
             dialogue_transformer_output,

rasa/core/processor.py~L785

     async def _handle_message_with_tracker(
         self, message: UserMessage, tracker: DialogueStateTracker
     ) -> None:
+
         if message.parse_data:
             parse_data = message.parse_data
         else:

rasa/core/test.py~L702

     event: ActionExecuted,
     fail_on_prediction_errors: bool,
 ) -> Tuple[EvaluationStore, PolicyPrediction, Optional[EntityEvaluationResult]]:
+
     action_executed_eval_store = EvaluationStore()
 
     expected_action_name = event.action_name

rasa/core/test.py~L820

     List[Dict[Text, Any]],
     List[EntityEvaluationResult],
 ]:
+
     processor = agent.processor
     if agent.processor is not None:
         processor = agent.processor

rasa/engine/recipes/default_recipe.py~L668

         preprocessors: List[Text],
         train_nodes: Dict[Text, SchemaNode],
     ) -> Dict[Text, SchemaNode]:
+
         predict_config = copy.deepcopy(config)
         predict_nodes = {}
 

rasa/engine/storage/local_model_storage.py~L221

 
     @staticmethod
     def _persist_metadata(metadata: ModelMetadata, temporary_directory: Path) -> None:
+
         rasa.shared.utils.io.dump_obj_as_json_to_file(
             temporary_directory / MODEL_ARCHIVE_METADATA_FILE, metadata.as_dict()
         )

rasa/engine/validation.py~L485

     parent_return_type: TypeAnnotation,
     required_type: TypeAnnotation,
 ) -> None:
+
     if not typing_utils.issubtype(parent_return_type, required_type):
         parent_node_text = ""
         if parent_node:

rasa/nlu/classifiers/diet_classifier.py~L506

     def _extract_features(
         self, message: Message, attribute: Text
     ) -> Dict[Text, Union[scipy.sparse.spmatrix, np.ndarray]]:
+
         (
             sparse_sequence_features,
             sparse_sentence_features,

rasa/nlu/classifiers/diet_classifier.py~L775

         sparse_feature_sizes: Dict[Text, Dict[Text, List[int]]],
         label_attribute: Optional[Text] = None,
     ) -> Dict[Text, Dict[Text, List[int]]]:
+
         if label_attribute in sparse_feature_sizes:
             del sparse_feature_sizes[label_attribute]
         return sparse_feature_sizes

rasa/nlu/classifiers/diet_classifier.py~L1260

         config: Dict[Text, Any],
         finetune_mode: bool,
     ) -> "RasaModel":
+
         predict_data_example = RasaModelData(
             label_key=model_data_example.label_key,
             data={

rasa/nlu/classifiers/diet_classifier.py~L1502

         sequence_feature_lengths: tf.Tensor,
         name: Text,
     ) -> tf.Tensor:
+
         x, _ = self._tf_layers[f"feature_combining_layer.{name}"](
             (sequence_features, sentence_features, sequence_feature_lengths),
             training=self._training,

rasa/nlu/classifiers/diet_classifier.py~L1683

         return loss
 
     def _update_label_metrics(self, loss: tf.Tensor, acc: tf.Tensor) -> None:
+
         self.intent_loss.update_state(loss)
         self.intent_acc.update_state(acc)
 

rasa/nlu/classifiers/diet_classifier.py~L1841

         combined_sequence_sentence_feature_lengths: tf.Tensor,
         text_transformed: tf.Tensor,
     ) -> Dict[Text, tf.Tensor]:
+
         if self.all_labels_embed is None:
             raise ValueError(
                 "The model was not prepared for prediction. "

rasa/nlu/extractors/crf_entity_extractor.py~L101

         CRFEntityExtractorOptions.SUFFIX1: lambda crf_token: crf_token.text[-1:],
         CRFEntityExtractorOptions.BIAS: lambda _: "bias",
         CRFEntityExtractorOptions.POS: lambda crf_token: crf_token.pos_tag,
-        CRFEntityExtractorOptions.POS2: lambda crf_token: crf_token.pos_tag[:2]
-        if crf_token.pos_tag is not None
-        else None,
+        CRFEntityExtractorOptions.POS2: lambda crf_token: (
+            crf_token.pos_tag[:2] if crf_token.pos_tag is not None else None
+        ),
         CRFEntityExtractorOptions.UPPER: lambda crf_token: crf_token.text.isupper(),
         CRFEntityExtractorOptions.DIGIT: lambda crf_token: crf_token.text.isdigit(),
         CRFEntityExtractorOptions.PATTERN: lambda crf_token: crf_token.pattern,

rasa/nlu/featurizers/dense_featurizer/convert_featurizer.py~L323

         return texts
 
     def _sentence_encoding_of_text(self, batch: List[Text]) -> np.ndarray:
+
         return self.sentence_encoding_signature(tf.convert_to_tensor(batch))[
             "default"
         ].numpy()
 
     def _sequence_encoding_of_text(self, batch: List[Text]) -> np.ndarray:
+
         return self.sequence_encoding_signature(tf.convert_to_tensor(batch))[
             "sequence_encoding"
         ].numpy()

rasa/nlu/featurizers/dense_featurizer/convert_featurizer.py~L407

             )
 
     def _tokenize(self, sentence: Text) -> Any:
+
         return self.tokenize_signature(tf.convert_to_tensor([sentence]))[
             "default"
         ].numpy()

rasa/nlu/featurizers/sparse_featurizer/count_vectors_featurizer.py~L98

         return ["sklearn"]
 
     def _load_count_vect_params(self) -> None:
+
         # Use shared vocabulary between text and all other attributes of Message
         self.use_shared_vocab = self._config["use_shared_vocab"]
 

rasa/nlu/featurizers/sparse_featurizer/lexical_syntactic_featurizer.py~L86

         "suffix2": lambda token: token.text[-2:],
         "suffix1": lambda token: token.text[-1:],
         "pos": lambda token: token.data.get(POS_TAG_KEY, None),
-        "pos2": lambda token: token.data.get(POS_TAG_KEY, [])[:2]
-        if POS_TAG_KEY in token.data
-        else None,
+        "pos2": lambda token: (
+            token.data.get(POS_TAG_KEY, [])[:2] if POS_TAG_KEY in token.data else None
+        ),
         "upper": lambda token: token.text.isupper(),
         "digit": lambda token: token.text.isdigit(),
     }

rasa/nlu/persistor.py~L95

 
     @staticmethod
     def _tar_name(model_name: Text, include_extension: bool = True) -> Text:
+
         ext = ".tar.gz" if include_extension else ""
         return f"{model_name}{ext}"
 

rasa/nlu/selectors/response_selector.py~L625

         config: Dict[Text, Any],
         finetune_mode: bool = False,
     ) -> "RasaModel":
+
         predict_data_example = RasaModelData(
             label_key=model_data_example.label_key,
             data={

rasa/nlu/selectors/response_selector.py~L721

             logger.debug(f"  {metric} ({name})")
 
     def _update_label_metrics(self, loss: tf.Tensor, acc: tf.Tensor) -> None:
+
         self.response_loss.update_state(loss)
         self.response_acc.update_state(acc)
 

rasa/nlu/test.py~L1568

 
 
 def _contains_entity_labels(entity_results: List[EntityEvaluationResult]) -> bool:
+
     for result in entity_results:
         if result.entity_targets or result.entity_predictions:
             return True

rasa/server.py~L234

         async def decorated(
             request: Request, *args: Any, **kwargs: Any
         ) -> response.HTTPResponse:
+
             provided = request.args.get("token", None)
 
             # noinspection PyProtectedMember

rasa/shared/core/events.py~L317

     def from_parameters(
         parameters: Dict[Text, Any], default: Optional[Type["Event"]] = None
     ) -> Optional["Event"]:
+
         event_name = parameters.get("event")
         if event_name is None:
             return None

rasa/shared/core/events.py~L1020

     def _from_story_string(
         cls, parameters: Dict[Text, Any]
     ) -> Optional[List["SlotSet"]]:
+
         slots = []
         for slot_key, slot_val in parameters.items():
             slots.append(SlotSet(slot_key, slot_val))

rasa/shared/core/events.py~L1221

     def _from_story_string(
         cls, parameters: Dict[Text, Any]
     ) -> Optional[List["ReminderScheduled"]]:
+
         trigger_date_time = parser.parse(parameters.get("date_time"))
 
         return [

rasa/shared/core/events.py~L1472

     def _from_story_string(
         cls, parameters: Dict[Text, Any]
     ) -> Optional[List["FollowupAction"]]:
+
         return [
             FollowupAction(
                 parameters.get("name"),

rasa/shared/core/training_data/story_reader/story_reader.py~L111

     def _add_checkpoint(
         self, name: Text, conditions: Optional[Dict[Text, Any]]
     ) -> None:
+
         # Ensure story part already has a name
         if not self.current_step_builder:
             raise StoryParseError(

rasa/shared/core/training_data/story_reader/yaml_story_reader.py~L539

         return default_value
 
     def _parse_action(self, step: Dict[Text, Any]) -> None:
+
         action_name = step.get(KEY_ACTION, "")
         if not action_name:
             rasa.shared.utils.io.raise_warning(

rasa/shared/core/training_data/story_reader/yaml_story_reader.py~L560

         self._add_event(ActiveLoop.type_name, {LOOP_NAME: active_loop_name})
 
     def _parse_checkpoint(self, step: Dict[Text, Any]) -> None:
+
         checkpoint_name = step.get(KEY_CHECKPOINT, "")
         slots = step.get(KEY_CHECKPOINT_SLOTS, [])
 

rasa/shared/importers/importer.py~L477

         return original.merge(e2e_domain)
 
     def _get_domain_with_e2e_actions(self) -> Domain:
+
         stories = self.get_stories()
 
         additional_e2e_action_names = set()

rasa/shared/importers/rasa.py~L26

         domain_path: Optional[Text] = None,
         training_data_paths: Optional[Union[List[Text], Text]] = None,
     ):
+
         self._domain_path = domain_path
 
         self._nlu_files = rasa.shared.data.get_data_files(

rasa/shared/nlu/training_data/formats/rasa_yaml.py~L105

         )
 
     def _parse_nlu(self, nlu_data: Optional[List[Dict[Text, Any]]]) -> None:
+
         if not nlu_data:
             return
 

rasa/shared/nlu/training_data/training_data.py~L47

         lookup_tables: Optional[List[Dict[Text, Any]]] = None,
         responses: Optional[Dict[Text, List[Dict[Text, Any]]]] = None,
     ) -> None:
+
         if training_examples:
             self.training_examples = self.sanitize_examples(training_examples)
         else:

rasa/utils/tensorflow/models.py~L889

         tag_name: Text,
         entity_tags: Optional[tf.Tensor] = None,
     ) -> Tuple[tf.Tensor, tf.Tensor, tf.Tensor]:
+
         tag_ids = tf.cast(tag_ids[:, :, 0], tf.int32)
 
         if entity_tags is not None:

tests/cli/test_rasa_data.py~L50

 
 
 def test_data_convert_nlu_json(run_in_simple_project: Callable[..., RunResult]):
+
     result = run_in_simple_project(
         "data",
         "convert",

tests/cli/test_rasa_data.py~L69

 def test_data_convert_nlu_yml(
     run: Callable[..., RunResult], tmp_path: Path, request: FixtureRequest
 ):
+
     target_file = tmp_path / "out.yml"
 
     # The request rootdir is required as the `testdir` fixture in `run` changes the

tests/cli/test_rasa_train.py~L119

 def test_train_no_domain_exists(
     run_in_simple_project: Callable[..., RunResult], tmp_path: Path
 ) -> None:
+
     os.remove("domain.yml")
     run_in_simple_project(
         "train",

tests/cli/test_rasa_x.py~L146

 
 
 def test_rasa_x_raises_warning_and_exits_without_production_flag():
+
     args = argparse.Namespace(loglevel=None, log_file=None, production=None)
     with pytest.raises(SystemExit):
         with pytest.warns(

tests/core/channels/test_hangouts.py~L9

 
 
 def test_hangouts_channel():
+
     from rasa.core.channels.hangouts import HangoutsInput
     import rasa.core
 

tests/core/channels/test_hangouts.py~L161

 
 @pytest.mark.asyncio
 async def test_hangouts_output_channel_functions():
+
     from rasa.core.channels.hangouts import HangoutsOutput
 
     output_channel = HangoutsOutput()

tests/core/channels/test_twilio_voice.py~L15

 
 
 async def test_twilio_voice_twiml_response_text():
+
     inputs = {
         "initial_prompt": "hello",
         "reprompt_fallback_phrase": "i didn't get that",

tests/core/channels/test_twilio_voice.py~L43

 
 
 async def test_twilio_voice_twiml_response_buttons():
+
     inputs = {
         "initial_prompt": "hello",
         "reprompt_fallback_phrase": "i didn't get that",

tests/core/channels/test_twilio_voice.py~L156

 
 
 async def test_twilio_voice_remove_image():
+
     with pytest.warns(UserWarning):
         output_channel = TwilioVoiceCollectingOutputChannel()
         await output_channel.send_response(

tests/core/channels/test_twilio_voice.py~L165

 
 
 async def test_twilio_voice_keep_image_text():
+
     output_channel = TwilioVoiceCollectingOutputChannel()
     await output_channel.send_response(
         recipient_id="Chuck Norris",

tests/core/channels/test_twilio_voice.py~L175

 
 
 async def test_twilio_emoji_warning():
+
     with pytest.warns(UserWarning):
         output_channel = TwilioVoiceCollectingOutputChannel()
         await output_channel.send_response(

tests/core/channels/test_twilio_voice.py~L183

 
 
 async def test_twilio_voice_multiple_responses():
+
     inputs = {
         "initial_prompt": "hello",
         "reprompt_fallback_phrase": "i didn't get that",

tests/core/evaluation/test_marker.py~L651

     ],
 )
 def test_marker_from_path_adds_special_or_marker(tmp_path: Path, configs: Any):
+
     yaml_file = tmp_path / "config.yml"
     rasa.shared.utils.io.write_yaml(data=configs, target=yaml_file)
     loaded = Marker.from_path(tmp_path)

tests/core/evaluation/test_marker_stats.py~L258

 
 @pytest.mark.parametrize("seed", [2345, 5654, 2345234])
 def test_per_session_statistics_to_csv(tmp_path: Path, seed: int):
+
     rng = np.random.default_rng(seed=seed)
     (
         per_session_results,

tests/core/featurizers/test_precomputation.py~L497

     collector: CoreFeaturizationCollector,
     messages_with_unique_lookup_key: List[Message],
 ):
+
     messages = messages_with_unique_lookup_key
 
     # pass as training data

tests/core/featurizers/test_single_state_featurizers.py~L418

 
 
 def test_encode_entities__with_entity_roles_and_groups():
+
     # create fake message that has been tokenized and entities have been extracted
     text = "I am flying from London to Paris"
     tokens = [

tests/core/featurizers/test_single_state_featurizers.py~L475

 
 
 def test_encode_entities__with_bilou_entity_roles_and_groups():
+
     # Instantiate domain and configure the single state featurizer for this domain.
     # Note that there are 2 entity tags here.
     entity_tags = ["city", f"city{ENTITY_LABEL_SEPARATOR}to"]

tests/core/policies/test_ted_policy.py~L101

 class TestTEDPolicy(PolicyTestCollection):
     @staticmethod
     def _policy_class_to_test() -> Type[TEDPolicy]:
+
         return TEDPolicy
 
     def test_train_model_checkpointing(

tests/core/policies/test_unexpected_intent_policy.py~L56

 class TestUnexpecTEDIntentPolicy(TestTEDPolicy):
     @staticmethod
     def _policy_class_to_test() -> Type[UnexpecTEDIntentPolicy]:
+
         return UnexpecTEDIntentPolicy
 
     @pytest.fixture(scope="class")

tests/core/policies/test_unexpected_intent_policy.py~L98

     def test_label_data_assembly(
         self, trained_policy: UnexpecTEDIntentPolicy, default_domain: Domain
     ):
+
         # Construct input data
         state_featurizer = trained_policy.featurizer.state_featurizer
         encoded_all_labels = state_featurizer.encode_all_labels(

tests/core/policies/test_unexpected_intent_policy.py~L846

             all_similarities: np.array,
             label_index: int,
         ):
+
             expected_score = all_similarities[0][label_index]
             expected_threshold = (
                 all_thresholds[label_index] if label_index in all_thresholds else None

tests/core/test_actions.py~L787

 
 
 async def test_response_channel_specific(default_nlg, default_tracker, domain: Domain):
+
     output_channel = SlackBot("DummyToken", "General")
 
     events = await ActionBotResponse("utter_channel").run(

tests/core/test_broker.py~L99

 
 
 async def test_pika_raise_connection_exception(monkeypatch: MonkeyPatch):
+
     monkeypatch.setattr(
         PikaEventBroker, "connect", AsyncMock(side_effect=ChannelNotFoundEntity())
     )

tests/core/test_broker.py~L326

 
 
 async def test_create_pika_invalid_port():
+
     cfg = EndpointConfig(
         username="username", password="password", type="pika", port="PORT"
     )

tests/core/test_nlg.py~L12

 
 
 def nlg_app(base_url="/"):
+
     app = Sanic("test_nlg")
 
     @app.route(base_url, methods=["POST"])

tests/core/test_policies.py~L424

         default_domain: Domain,
         stories_path: Text,
     ):
+
         execution_context = dataclasses.replace(execution_context, is_finetuning=True)
         loaded_policy = MemoizationPolicy.load(
             trained_policy.config, model_storage, resource, execution_context

tests/core/test_test.py~L198

     monkeypatch: MonkeyPatch,
     moodbot_domain_path: Path,
 ) -> Callable[[Path, bool], Coroutine]:
+
     # We need `RulePolicy` to predict the correct actions
     # in a particular conversation context as seen during training.
     # Since it can get affected by `action_unlikely_intent` being triggered in

tests/core/test_tracker_stores.py~L179

 
 
 def test_redis_tracker_store_invalid_key_prefix(domain: Domain):
+
     test_invalid_key_prefix = "$$ &!"
 
     tracker_store = RedisTrackerStore(

tests/engine/recipes/test_default_recipe.py~L593

 
 
 def test_dump_config_missing_file(tmp_path: Path, capsys: CaptureFixture):
+
     config_path = tmp_path / "non_existent_config.yml"
 
     config = rasa.shared.utils.io.read_config_file(str(SOME_CONFIG))

tests/engine/test_caching.py~L50

         model_storage: ModelStorage,
         output_fingerprint: Text,
     ) -> "TestCacheableOutput":
+
         value = rasa.shared.utils.io.read_json_file(directory / "cached.json")
 
         return cls(value, cache_dir=directory)

tests/engine/test_validation.py~L93

     language: Optional[Text] = None,
     is_train_graph: bool = True,
 ) -> GraphModelConfiguration:
+
     parent_node = {}
     if parent:
         parent_node = {

tests/engine/training/test_components.py~L17

 
 
 def test_cached_component_returns_value_from_cache(default_model_storage: ModelStorage):
+
     cached_output = CacheableText("Cache me!!")
 
     node = GraphNode(

tests/engine/training/test_components.py~L105

 def test_fingerprint_component_hit(
     default_model_storage: ModelStorage, temp_cache: TrainingCache
 ):
+
     cached_output = CacheableText("Cache me!!")
     output_fingerprint = uuid.uuid4().hex
 

tests/engine/training/test_components.py~L159

 def test_fingerprint_component_miss(
     default_model_storage: ModelStorage, temp_cache: TrainingCache
 ):
+
     component_config = {"x": 1}
 
     node = GraphNode(

tests/engine/training/test_graph_trainer.py~L112

     train_with_schema: Callable,
     spy_on_all_components: Callable,
 ):
+
     input_file = tmp_path / "input_file.txt"
     input_file.write_text("3")
 

tests/engine/training/test_graph_trainer.py~L212

     train_with_schema: Callable,
     spy_on_all_components: Callable,
 ):
+
     input_file = tmp_path / "input_file.txt"
     input_file.write_text("3")
 

tests/engine/training/test_graph_trainer.py~L267

     train_with_schema: Callable,
     spy_on_all_components: Callable,
 ):
+
     input_file = tmp_path / "input_file.txt"
     input_file.write_text("3")
 

tests/engine/training/test_graph_trainer.py~L382

     train_with_schema: Callable,
     caplog: LogCaptureFixture,
 ):
+
     input_file = tmp_path / "input_file.txt"
     input_file.write_text("3")
 

tests/examples/test_example_bots_training_data.py~L65

     raise_slot_warning: bool,
     msg: Optional[Text],
 ):
+
     importer = TrainingDataImporter.load_from_config(
         config_file, domain_file, [data_folder]
     )

tests/graph_components/validators/test_default_recipe_validator.py~L705

 def test_core_warn_if_data_but_no_policy(
     monkeypatch: MonkeyPatch, policy_type: Optional[Type[Policy]]
 ):
+
     importer = TrainingDataImporter.load_from_dict(
         domain_path="data/test_e2ebot/domain.yml",
         training_data_paths=[

tests/graph_components/validators/test_default_recipe_validator.py~L838

 def test_core_raise_if_a_rule_policy_is_incompatible_with_domain(
     monkeypatch: MonkeyPatch,
 ):
+
     domain = Domain.empty()
 
     num_instances = 2

tests/graph_components/validators/test_default_recipe_validator.py~L883

     num_duplicates: bool,
     priority: int,
 ):
+
     assert len(policy_types) >= priority + num_duplicates, (
         f"This tests needs at least {priority + num_duplicates} many types."
     )

tests/graph_components/validators/test_default_recipe_validator.py~L947

 
 @pytest.mark.parametrize("policy_type_consuming_rule_data", [RulePolicy])
 def test_core_warn_if_rule_data_missing(policy_type_consuming_rule_data: Type[Policy]):
+
     importer = TrainingDataImporter.load_from_dict(
         domain_path="data/test_e2ebot/domain.yml",
         training_data_paths=[

tests/graph_components/validators/test_default_recipe_validator.py~L976

 def test_core_warn_if_rule_data_unused(
     policy_type_not_consuming_rule_data: Type[Policy],
 ):
+
     importer = TrainingDataImporter.load_from_dict(
         domain_path="data/test_moodbot/domain.yml",
         training_data_paths=[

tests/nlu/classifiers/test_diet_classifier.py~L132

         message_text: Text = "Rasa is great!",
         expect_intent: bool = True,
     ) -> Message:
+
         if not pipeline:
             pipeline = [
                 {"component": WhitespaceTokenizer},

tests/nlu/classifiers/test_regex_message_handler.py~L38

 def test_process_does_not_do_anything(
     regex_message_handler: RegexMessageHandler, text: Text
 ):
+
     message = Message(
         data={TEXT: text, INTENT: "bla"},
         features=[

tests/nlu/classifiers/test_regex_message_handler.py~L77

 def test_regex_message_handler_adds_extractor_name(
     regex_message_handler: RegexMessageHandler, text: Text
 ):
+
     message = Message(
         data={TEXT: text, INTENT: "bla"},
         features=[

tests/nlu/conftest.py~L39

     def inner(
         pipeline: List[Dict[Text, Any]], training_data: Union[Text, TrainingData]
     ) -> Tuple[TrainingData, List[GraphComponent]]:
+
         if isinstance(training_data, str):
             importer = RasaFileImporter(training_data_paths=[training_data])
             training_data: TrainingData = importer.get_nlu_data()

tests/nlu/conftest.py~L75

 @pytest.fixture()
 def process_message(default_model_storage: ModelStorage) -> Callable[..., Message]:
     def inner(loaded_pipeline: List[GraphComponent], message: Message) -> Message:
+
         for component in loaded_pipeline:
             component.process([message])
 

tests/nlu/extractors/test_crf_entity_extractor.py~L132

     spacy_nlp_component: SpacyNLP,
     spacy_model: SpacyModel,
 ):
+
     crf_extractor = crf_entity_extractor(config_params)
 
     importer = RasaFileImporter(training_data_paths=["data/examples/rasa"])

tests/nlu/extractors/test_mitie_entity_extractor.py~L73

     mitie_model: MitieModel,
     with_trainable_examples: bool,
 ):
+
     # some texts where last token is a city
     texts_ending_with_city = ["Bert lives in Berlin", "Ernie asks where is Bielefeld"]
 

tests/nlu/extractors/test_regex_entity_extractor.py~L278

 def test_process_does_not_overwrite_any_entities(
     create_or_load_extractor: Callable[..., RegexEntityExtractor],
 ):
+
     pre_existing_entity = {
         ENTITY_ATTRIBUTE_TYPE: "person",
         ENTITY_ATTRIBUTE_VALUE: "Max",

tests/nlu/featurizers/test_lexical_syntactic_featurizer.py~L235

     resource_lexical_syntactic_featurizer: Resource,
     feature_config: List[Text],
 ) -> Callable[..., LexicalSyntacticFeaturizer]:
+
     config = {"alias": "lsf", "features": feature_config}
     featurizer = create_lexical_syntactic_featurizer(config)
 

tests/nlu/featurizers/test_lexical_syntactic_featurizer.py~L296

     feature_config: Dict[Text, Any],
     expected_features: np.ndarray,
 ):
+
     featurizer = create_lexical_syntactic_featurizer(
         {"alias": "lsf", "features": feature_config}
     )


... (truncated 4033 lines) ...
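The ecosystem diffs above all follow the same pattern: a `+` blank line appearing immediately after a `def ...:` header. Judging by the style name `allow_newline_after_block_open` (this is my reading of the diff, not a statement of Ruff's documented behavior), the 2026 preview style preserves a blank line that the author wrote right after a block opener, where the previous stable style removed it. A minimal sketch of code shaped like the affected Rasa/HouseWatch functions:

```python
# Sketch of the pattern flagged in the ecosystem check. Under the old
# stable style the blank line after the `def` header was stripped; the
# 2026 preview style appears to keep it, producing the `+` lines above.


def start_async_migration(name: str) -> bool:

    # The blank line above this comment is the one the new style
    # preserves instead of collapsing.
    return bool(name)


print(start_async_migration("test_migration"))
```

The semantics of the function are of course unchanged; only whether the blank line survives formatting differs between the two styles.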

@ntBre ntBre mentioned this pull request Jan 26, 2026
3 tasks
@dylwil3 dylwil3 force-pushed the 2026-style branch 3 times, most recently from 29d8081 to d0ad41b Compare January 26, 2026 17:47
@dylwil3 dylwil3 changed the base branch from main to brent/0.15.0 January 26, 2026 17:47
@ntBre ntBre force-pushed the brent/0.15.0 branch 2 times, most recently from e47647a to 30d21e2 Compare January 28, 2026 15:06
@dylwil3
Collaborator Author

dylwil3 commented Jan 29, 2026

After some discussion we have decided to wait on stabilizing the fluent layout for method chains, mainly because there is some community feedback around breaking at top-level methods for modules like numpy, pandas, and polars.
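For readers unfamiliar with the trade-off being deferred here: the question is how the formatter should break a method chain that exceeds the line length. The sketch below is an illustration of the two layouts under discussion, not verified Ruff output; the `Query` class is a hypothetical stand-in for a pandas/polars-style fluent API.

```python
# Hypothetical fluent API, standing in for pandas/polars-style chains.
class Query:
    def __init__(self, rows):
        self.rows = rows

    def filter(self, pred):
        return Query([r for r in self.rows if pred(r)])

    def select(self, key):
        return Query([r[key] for r in self.rows])

    def to_list(self):
        return self.rows


# Layout 1: the whole chain on one line while it fits.
result = Query([{"a": 1}, {"a": 2}]).filter(lambda r: r["a"] > 1).select("a").to_list()

# Layout 2: the "fluent" layout, breaking before each call once the
# chain is too long. The deferred design question is when to switch to
# this form, and whether the first (top-level) call should stay attached.
result_long = (
    Query([{"a": 1}, {"a": 2}])
    .filter(lambda r: r["a"] > 1)
    .select("a")
    .to_list()
)
```

Both layouts are semantically identical; the feedback mentioned above concerns which one reads better for data-frame-heavy code.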

ntBre and others added 7 commits February 2, 2026 12:18
Summary
--

Release branch for Ruff 0.15.0

Breaking changes
--
-

Behavior changes
--
-

Recoded rules
--
-

Deprecated rules
--
-

Changed rules
--
-

Removed rules
--
-

Stabilized rules
--
-

New or improved fixes
--
-

Deferred stabilizations
--
-

TODOs
--

- [ ] Drop ~~empty~~ first commit (random whitespace change to get a baseline ecosystem check executable)
- [ ] Merge with rebase-merge (**don't squash merge!!!!**)
…#22739)

Don't get excited! Opening PRs for all preview styles to decide.
@ntBre ntBre merged commit 6212b64 into brent/0.15.0 Feb 3, 2026
63 of 67 checks passed
@ntBre ntBre deleted the 2026-style branch February 3, 2026 14:39
ntBre added a commit that referenced this pull request Feb 3, 2026
Styles stabilized:

- [`avoid_parens_for_long_as_captures`](#22743)
- [`remove_parens_around_except_types`](#22741)
- [`allow_newline_after_block_open`](#22742)
- [`no_chaperone_for_escaped_quote_in_triple_quoted_docstring`](#22739)
- [`blank_line_before_decorated_class_in_stub`](#22740)
- [`parenthesize_lambda_bodies`](#22744)

To-do:

- [x] Change target branch to 0.15 release branch
- [x] Update documentation
- [x] Remove empty commit

---------

Co-authored-by: Brent Westbrook <[email protected]>
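Two of the stabilized styles are easy to picture from their names alone. The sketch below is inferred from those names, not verified Ruff output, so treat the before/after shapes as an assumption:

```python
# remove_parens_around_except_types: redundant parentheses around an
# except clause's type(s) are dropped, e.g. `except (ValueError):`
# is formatted as the plain form below.
try:
    parsed = int("not a number")
except ValueError:
    parsed = None

# parenthesize_lambda_bodies: a lambda body that must break across
# lines is wrapped in parentheses rather than left dangling.
classify = lambda n: (
    "small" if n < 10 else "large"
)

print(parsed, classify(3))
```

Note that `except (ValueError, TypeError):` with multiple types keeps its parentheses either way, since they are syntactically required there.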