This repository was archived by the owner on Apr 8, 2025. It is now read-only.
Describe the bug
Answers at the very beginning of a document cannot be extracted.
Error message
No errors, just the predictions are wrong.
Additional context
I suspect the logit validation is not correct. We check whether a start/end logit falls within the question tokens, and this check is possibly flawed.
The predictions are wrong for both BERT- and RoBERTa-type QA models.
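A minimal sketch of the kind of indexing bug I suspect. The function and parameter names here are hypothetical (not FARM's actual code); it only illustrates how a check that compares answer indices against the question length can silently reject answers at the start of the context if the indices are document-relative rather than relative to the full input sequence:

```python
# Hypothetical sketch of the suspected validation flaw (names are mine, not FARM's).
# BERT-style QA input is laid out as: [CLS] question tokens [SEP] context tokens [SEP].
# A check that rejects any span starting within the question region is harmless if
# indices refer to the full sequence, but wrong if they refer to the document.

def is_valid_span(start_idx, end_idx, num_question_tokens):
    """Reject spans whose start index lies within the question tokens."""
    return start_idx > num_question_tokens and end_idx >= start_idx

# If start_idx is relative to the *document*, the very first context token has
# start_idx == 0 and is always filtered out, matching the observed symptom:
print(is_valid_span(0, 1, num_question_tokens=8))    # answer at document start: rejected
print(is_valid_span(10, 11, num_question_tokens=8))  # answer later in the document: kept
```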
To Reproduce
Compare the predictions for the following inputs:
from farm.infer import Inferencer

inferencer = Inferencer.load(model_name_or_path="deepset/roberta-base-squad2", task_type="question_answering")
qa_input = [
    {
        "qas": ["What is the largest city in Germany?"],
        "context": "Berlin is the capital and largest city of Germany by both area and population.",
    }
]
results = inferencer.inference_from_dicts(qa_input)
print(results)
vs
qa_input = [
    {
        "qas": ["What is the largest city in Germany?"],
        "context": "Document testing. With short text before Berlin it is still no answer. Berlin is the capital and largest city of Germany by both area and population.",
    }
]
vs
qa_input = [
    {
        "qas": ["What is the largest city in Germany?"],
        "context": "Document testing this weird behaviour for all kinds of cases. With short text before Berlin it is still no answer. Berlin is the capital and largest city of Germany by both area and population.",
    }
]