Tesnière introduced dependency trees, structural order, and the concept of dependency, and applied his representational ideas to a variety of languages such as French, Greek, Russian, and Italian. In structural order, syntactic relations are presented hierarchically, as opposed to linear order. He used "stemmas" to reflect this hierarchy.
Today, dependency grammars fall into two traditions. One study from the tradition that applies this distinction to dependency grammar is the Prague Dependency Treebank (PDT), developed by the Prague School of Functional and Structural Linguistics. A well-known example of the other school of thought, which preserves linear order, is the Penn Treebank, built between 1989 and 1996 and containing roughly seven million words of annotated American English text.
In 2005, the Stanford Dependencies were developed for parsing English, for use in NLP studies, and for the Stanford Dependency Parser. Stanford Dependencies became the acknowledged standard for dependency analyses of English. However, the Stanford Dependency parser could not reach adequate accuracy when used with other dependency schemes. In the following years, the Universal Dependency Treebank (UDT) project pioneered the development of treebanks for languages other than English by transforming the Stanford dependencies into a more inclusive annotation scheme for a diverse set of languages.
The developments in dependency treebanking made it clear that Turkish needed a treebank of its own. The first Turkish dependency treebank is the METU-Sabancı Turkish Treebank, built on a corpus of 7,262 sentences with morphological and syntactic annotations. In 2016, this treebank was revisited under the name ITU-METU-Sabancı Treebank (IMST) to reduce the inconsistencies of the earlier version, which was achieved by applying a new annotation scheme. As a last step, the Boğaziçi-ITU-METU-Sabancı Treebank (BIMST) updated the same corpus: a linguistic team of three created a new annotation scheme for IMST and manually re-annotated 5,635 sentences, introducing new dependency relations that were not present in IMST.
You can also see the Java, Python, Cython, C++, C, Swift, Php, or C# repositories.
To check if you have a compatible version of Node.js installed, use the following command:
node -v
You can find the latest version of Node.js here.
Install the latest version of Git.
npm install nlptoolkit-dependencyparser
In order to work on the code, create a fork from the GitHub page. Use Git to clone the code to your local machine, or run the line below on Ubuntu:
git clone <your-fork-git-link>
A directory called turkishdependencyparser-js will be created. Or you can use the link below to explore the code:
git clone https://github.com/starlangsoftware/turkishdependencyparser-js.git
Steps for opening the cloned project:
- Start IDE
- Select File | Open from main menu
- Choose the DependencyParser-Js file
- Select the "Open as Project" option
- After a couple of seconds, the dependencies will be downloaded.
- The main and types fields are important when this package is imported.
"main": "dist/index.js",
"types": "dist/index.d.ts",
- The dependency list should be maximal: not only direct but also indirect references should be given, and everything used directly in the code must appear here.
"dependencies": {
"nlptoolkit-corpus": "^1.0.12",
"nlptoolkit-dictionary": "^1.0.14",
"nlptoolkit-morphologicalanalysis": "^1.0.19",
"nlptoolkit-xmlparser": "^1.0.7"
}
- The compiler options currently include the nodeNext module setting for imports.
"compilerOptions": {
"outDir": "dist",
"module": "nodeNext",
"sourceMap": true,
"noImplicitAny": true,
"removeComments": false,
"declaration": true,
},
- The tests, node_modules, and dist folders should be excluded.
"exclude": [
"tests",
"node_modules",
"dist"
]
- The index file should export all TypeScript classes.
export * from "./CategoryType"
export * from "./InterlingualDependencyType"
export * from "./InterlingualRelation"
export * from "./Literal"
- Add data files to the project folder. Subprojects should include all data files of the parent projects.
- Classes should be defined as exported.
export class JCN extends ICSimilarity {
- Do not forget to comment each function.
/**
* Computes JCN wordnet similarity metric between two synsets.
* @param synSet1 First synset
* @param synSet2 Second synset
* @return JCN wordnet similarity metric between two synsets
*/
computeSimilarity(synSet1: SynSet, synSet2: SynSet): number {
- Function names should follow camel case.
setSynSetId(synSetId: string){
- Write getter and setter methods.
getRelation(index: number): Relation{
setName(name: string){
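The naming and getter/setter conventions above can be combined in a small, complete sketch. The SynSetInfo class below is hypothetical (it is not part of this library) and only illustrates camel-case method names together with a matching getter and setter pair:

```typescript
// Illustrative only: a hypothetical SynSetInfo class demonstrating
// camel-case method names and getter/setter methods for a private field.
export class SynSetInfo {

    private synSetId: string;

    constructor(synSetId: string) {
        this.synSetId = synSetId;
    }

    // Getter: returns the synset id.
    getSynSetId(): string {
        return this.synSetId;
    }

    // Setter: updates the synset id.
    setSynSetId(synSetId: string) {
        this.synSetId = synSetId;
    }
}
```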
- Use the standard JavaScript test style.
describe('SimilarityPathTest', function() {
    it('testComputeSimilarity', function() {
        let turkish = new WordNet();
        let similarityPath = new SimilarityPath(turkish);
        assert.strictEqual(32.0, similarityPath.computeSimilarity(turkish.getSynSetWithId("TUR10-0656390"), turkish.getSynSetWithId("TUR10-0600460")));
        assert.strictEqual(13.0, similarityPath.computeSimilarity(turkish.getSynSetWithId("TUR10-0412120"), turkish.getSynSetWithId("TUR10-0755370")));
        assert.strictEqual(13.0, similarityPath.computeSimilarity(turkish.getSynSetWithId("TUR10-0195110"), turkish.getSynSetWithId("TUR10-0822980")));
    });
});
- Enumerated types should be declared with enum.
export enum CategoryType {
MATHEMATICS, SPORT, MUSIC, SLANG, BOTANIC,
PLURAL, MARINE, HISTORY, THEOLOGY, ZOOLOGY,
METAPHOR, PSYCHOLOGY, ASTRONOMY, GEOGRAPHY, GRAMMAR,
MILITARY, PHYSICS, PHILOSOPHY, MEDICAL, THEATER,
ECONOMY, LAW, ANATOMY, GEOMETRY, BUSINESS,
PEDAGOGY, TECHNOLOGY, LOGIC, LITERATURE, CINEMA,
TELEVISION, ARCHITECTURE, TECHNICAL, SOCIOLOGY, BIOLOGY,
CHEMISTRY, GEOLOGY, INFORMATICS, PHYSIOLOGY, METEOROLOGY,
MINERALOGY
}
- If there are multiple constructors for a class, define them as constructor1, constructor2, ..., then call these methods from the original constructor.
constructor1(symbol: any){
constructor2(symbol: any, multipleFile: MultipleFile) {
constructor(symbol: any, multipleFile: MultipleFile = undefined) {
if (multipleFile == undefined){
this.constructor1(symbol);
} else {
this.constructor2(symbol, multipleFile);
}
}
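The fragments above can be assembled into a complete, runnable sketch. The Interval class below is hypothetical (not part of this library) and only illustrates how the original constructor dispatches to constructor1 and constructor2 based on the optional parameter:

```typescript
// Illustrative only: a hypothetical Interval class demonstrating the
// constructor1/constructor2 dispatch pattern for multiple constructors.
export class Interval {

    private start: number = 0;
    private end: number = 0;

    // constructor1: builds the interval [0, length).
    constructor1(length: number) {
        this.start = 0;
        this.end = length;
    }

    // constructor2: builds the interval [start, end).
    constructor2(start: number, end: number) {
        this.start = start;
        this.end = end;
    }

    // The original constructor dispatches on the optional parameter.
    constructor(first: number, second?: number) {
        if (second == undefined) {
            this.constructor1(first);
        } else {
            this.constructor2(first, second);
        }
    }

    length(): number {
        return this.end - this.start;
    }
}
```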
- Importing should be done via the import statement, referencing the module paths under node_modules.
import {Corpus} from "nlptoolkit-corpus/dist/Corpus";
import {Sentence} from "nlptoolkit-corpus/dist/Sentence";
- Use the xmlparser package for parsing XML files.
import {XmlDocument} from "nlptoolkit-xmlparser/dist/XmlDocument";

let doc = new XmlDocument("test.xml")
doc.parse()
let root = doc.getFirstChild()
let firstChild = root.getFirstChild()

