Commit 9c5f76b (1 parent: 587d4ba)

Apply auto TOC to all of docs under docs/interpreter/
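The `<div id="toc"></div>` placeholder this commit adds to each page is presumably filled in by a site script at render time. The actual script is not part of this diff; the general pattern can be sketched as follows, with the function name and anchor-slug rule being illustrative assumptions rather than the real Zeppelin docs code:

```javascript
// Hypothetical sketch of an auto-TOC generator (not the actual Zeppelin docs script).
// Given the headings of a page, build the list markup that would be injected
// into the <div id="toc"></div> placeholder.
function buildToc(headings) {
  // headings: [{ level: 1, text: "..." }, ...]. Only ## and deeper are listed,
  // since the single # title is the page heading itself -- which is why this
  // commit demotes the old ## page titles to # and adds ## Overview sections.
  const items = headings
    .filter((h) => h.level >= 2)
    .map((h) => {
      // Assumed slug rule: lowercase, non-alphanumerics collapsed to hyphens.
      const anchor = h.text.toLowerCase().replace(/[^a-z0-9]+/g, "-");
      const indent = "  ".repeat(h.level - 2);
      return `${indent}<li><a href="#${anchor}">${h.text}</a></li>`;
    });
  return `<ul>\n${items.join("\n")}\n</ul>`;
}

// Example: the heading structure the alluxio.md diff below produces.
console.log(
  buildToc([
    { level: 1, text: "Alluxio Interpreter for Apache Zeppelin" },
    { level: 2, text: "Overview" },
    { level: 2, text: "Configuration" },
  ])
);
```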

17 files changed: +179 −116 lines changed

docs/interpreter/alluxio.md
Lines changed: 5 additions & 1 deletion

@@ -6,7 +6,11 @@ group: manual
 ---
 {% include JB/setup %}
 
-## Alluxio Interpreter for Apache Zeppelin
+# Alluxio Interpreter for Apache Zeppelin
+
+<div id="toc"></div>
+
+## Overview
 [Alluxio](http://alluxio.org/) is a memory-centric distributed storage system enabling reliable data sharing at memory-speed across cluster frameworks.
 
 ## Configuration

docs/interpreter/cassandra.md
Lines changed: 3 additions & 1 deletion

@@ -6,7 +6,9 @@ group: manual
 ---
 {% include JB/setup %}
 
-## Cassandra CQL Interpreter for Apache Zeppelin
+# Cassandra CQL Interpreter for Apache Zeppelin
+
+<div id="toc"></div>
 
 <table class="table-configuration">
 <tr>

docs/interpreter/elasticsearch.md
Lines changed: 5 additions & 1 deletion

@@ -6,7 +6,11 @@ group: manual
 ---
 {% include JB/setup %}
 
-## Elasticsearch Interpreter for Apache Zeppelin
+# Elasticsearch Interpreter for Apache Zeppelin
+
+<div id="toc"></div>
+
+## Overview
 [Elasticsearch](https://www.elastic.co/products/elasticsearch) is a highly scalable open-source full-text search and analytics engine. It allows you to store, search, and analyze big volumes of data quickly and in near real time. It is generally used as the underlying engine/technology that powers applications that have complex search features and requirements.
 
 ## Configuration

docs/interpreter/flink.md
Lines changed: 5 additions & 1 deletion

@@ -6,7 +6,11 @@ group: manual
 ---
 {% include JB/setup %}
 
-## Flink interpreter for Apache Zeppelin
+# Flink interpreter for Apache Zeppelin
+
+<div id="toc"></div>
+
+## Overview
 [Apache Flink](https://flink.apache.org) is an open source platform for distributed stream and batch data processing. Flink’s core is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams. Flink also builds batch processing on top of the streaming engine, overlaying native iteration support, managed memory, and program optimization.
 
 ## How to start local Flink cluster, to test the interpreter

docs/interpreter/geode.md
Lines changed: 17 additions & 13 deletions

@@ -6,7 +6,11 @@ group: manual
 ---
 {% include JB/setup %}
 
-## Geode/Gemfire OQL Interpreter for Apache Zeppelin
+# Geode/Gemfire OQL Interpreter for Apache Zeppelin
+
+<div id="toc"></div>
+
+## Overview
 <table class="table-configuration">
 <tr>
 <th>Name</th>
@@ -33,7 +37,7 @@ This interpreter supports the [Geode](http://geode.incubator.apache.org/) [Objec
 
 This [Video Tutorial](https://www.youtube.com/watch?v=zvzzA9GXu3Q) illustrates some of the features provided by the `Geode Interpreter`.
 
-### Create Interpreter
+## Create Interpreter
 By default Zeppelin creates one `Geode/OQL` instance. You can remove it or create more instances.
 
 Multiple Geode instances can be created, each configured to the same or different backend Geode cluster. But over time a `Notebook` can have only one Geode interpreter instance `bound`. That means you _cannot_ connect to different Geode clusters in the same `Notebook`. This is a known Zeppelin limitation.
@@ -42,10 +46,10 @@ To create new Geode instance open the `Interpreter` section and click the `+Crea
 
 > Note: The `Name` of the instance is used only to distinguish the instances while binding them to the `Notebook`. The `Name` is irrelevant inside the `Notebook`. In the `Notebook` you must use `%geode.oql` tag.
 
-### Bind to Notebook
+## Bind to Notebook
 In the `Notebook` click on the `settings` icon in the top right corner. The select/deselect the interpreters to be bound with the `Notebook`.
 
-### Configuration
+## Configuration
 You can modify the configuration of the Geode from the `Interpreter` section. The Geode interpreter expresses the following properties:
 
 <table class="table-configuration">
@@ -71,12 +75,12 @@ You can modify the configuration of the Geode from the `Interpreter` section. T
 </tr>
 </table>
 
-### How to use
+## How to use
 > *Tip 1: Use (CTRL + .) for OQL auto-completion.*
 
 > *Tip 2: Always start the paragraphs with the full `%geode.oql` prefix tag! The short notation: `%geode` would still be able run the OQL queries but the syntax highlighting and the auto-completions will be disabled.*
 
-#### Create / Destroy Regions
+### Create / Destroy Regions
 The OQL specification does not support [Geode Regions](https://cwiki.apache.org/confluence/display/GEODE/Index#Index-MainConceptsandComponents) mutation operations. To `create`/`destroy` regions one should use the [GFSH](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/chapter_overview.html) shell tool instead. In the following it is assumed that the GFSH is colocated with Zeppelin server.
 
 ```bash
@@ -97,7 +101,7 @@ EOF
 
 Above snippet re-creates two regions: `regionEmployee` and `regionCompany`. Note that you have to explicitly specify the locator host and port. The values should match those you have used in the Geode Interpreter configuration. Comprehensive list of [GFSH Commands by Functional Area](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/gfsh_quick_reference.html).
 
-#### Basic OQL
+### Basic OQL
 ```sql
 %geode.oql
 SELECT count(*) FROM /regionEmployee
@@ -136,7 +140,7 @@ SELECT e.key, e.value FROM /regionEmployee.entrySet e
 
 > Note: You can have multiple queries in the same paragraph but only the result from the first is displayed. [[1](https://issues.apache.org/jira/browse/ZEPPELIN-178)], [[2](https://issues.apache.org/jira/browse/ZEPPELIN-212)].
 
-#### GFSH Commands From The Shell
+### GFSH Commands From The Shell
 Use the Shell Interpreter (`%sh`) to run OQL commands form the command line:
 
 ```bash
@@ -145,15 +149,18 @@ source /etc/geode/conf/geode-env.sh
 gfsh -e "connect" -e "list members"
 ```
 
-#### Apply Zeppelin Dynamic Forms
+### Apply Zeppelin Dynamic Forms
 You can leverage [Zeppelin Dynamic Form](../manual/dynamicform.html) inside your OQL queries. You can use both the `text input` and `select form` parameterization features
 
 ```sql
 %geode.oql
 SELECT * FROM /regionEmployee e WHERE e.employeeId > ${Id}
 ```
 
-#### Geode REST API
+### Auto-completion
+The Geode Interpreter provides a basic auto-completion functionality. On `(Ctrl+.)` it list the most relevant suggestions in a pop-up window.
+
+## Geode REST API
 To list the defined regions you can use the [Geode REST API](http://geode-docs.cfapps.io/docs/geode_rest/chapter_overview.html):
 
 ```
@@ -182,6 +189,3 @@ http://<geode server hostname>phd1.localdomain:8484/gemfire-api/v1/
 http-service-port=8484
 start-dev-rest-api=true
 ```
-
-### Auto-completion
-The Geode Interpreter provides a basic auto-completion functionality. On `(Ctrl+.)` it list the most relevant suggestions in a pop-up window.
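The `${Id}` placeholder in the dynamic-forms example above is substituted by Zeppelin with the user-supplied form value before the paragraph reaches the interpreter. As a rough illustration of that substitution step (this is an assumed sketch, not Zeppelin's actual implementation; the function name is hypothetical):

```javascript
// Illustrative sketch of Zeppelin-style dynamic-form substitution (not the
// real Zeppelin code). A ${name} placeholder in a paragraph is replaced with
// the form value before the query runs; ${name=default} carries a default.
function applyDynamicForms(paragraph, values) {
  return paragraph.replace(/\$\{([^}=]+)(?:=([^}]*))?\}/g, (_, name, def) => {
    const v = values[name.trim()];
    // Fall back to the inline default, then to an empty string.
    return v !== undefined ? String(v) : def !== undefined ? def : "";
  });
}

console.log(
  applyDynamicForms(
    "SELECT * FROM /regionEmployee e WHERE e.employeeId > ${Id}",
    { Id: 100 }
  )
);
```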

docs/interpreter/hbase.md
Lines changed: 9 additions & 3 deletions

@@ -6,16 +6,22 @@ group: manual
 ---
 {% include JB/setup %}
 
-## HBase Shell Interpreter for Apache Zeppelin
+# HBase Shell Interpreter for Apache Zeppelin
+
+<div id="toc"></div>
+
+## Overview
 [HBase Shell](http://hbase.apache.org/book.html#shell) is a JRuby IRB client for Apache HBase. This interpreter provides all capabilities of Apache HBase shell within Apache Zeppelin. The interpreter assumes that Apache HBase client software has been installed and it can connect to the Apache HBase cluster from the machine on where Apache Zeppelin is installed.
-To get start with HBase, please see [HBase Quickstart](https://hbase.apache.org/book.html#quickstart)
+To get start with HBase, please see [HBase Quickstart](https://hbase.apache.org/book.html#quickstart).
 
 ## HBase release supported
 By default, Zeppelin is built against HBase 1.0.x releases. To work with HBase 1.1.x releases, use the following build command:
+
 ```bash
 # HBase 1.1.4
 mvn clean package -DskipTests -Phadoop-2.6 -Dhadoop.version=2.6.0 -P build-distr -Dhbase.hbase.version=1.1.4 -Dhbase.hadoop.version=2.6.0
 ```
+
 To work with HBase 1.2.0+, use the following build command:
 
 ```bash
@@ -94,4 +100,4 @@ And then to put data into that table
 put 'test', 'row1', 'cf:a', 'value1'
 ```
 
-For more information on all commands available, refer to [HBase shell commands](https://learnhbase.wordpress.com/2013/03/02/hbase-shell-commands/)
+For more information on all commands available, refer to [HBase shell commands](https://learnhbase.wordpress.com/2013/03/02/hbase-shell-commands/).

docs/interpreter/hdfs.md
Lines changed: 10 additions & 3 deletions

@@ -6,8 +6,11 @@ group: manual
 ---
 {% include JB/setup %}
 
-## HDFS File System Interpreter for Apache Zeppelin
+# HDFS File System Interpreter for Apache Zeppelin
 
+<div id="toc"></div>
+
+## Overview
 [Hadoop File System](http://hadoop.apache.org/) is a distributed, fault tolerant file system part of the hadoop project and is often used as storage for distributed processing engines like [Hadoop MapReduce](http://hadoop.apache.org/) and [Apache Spark](http://spark.apache.org/) or underlying file systems like [Alluxio](http://www.alluxio.org/).
 
 ## Configuration
@@ -44,13 +47,17 @@ It supports the basic shell file commands applied to HDFS, it currently only sup
 
 > **Tip :** Use ( Ctrl + . ) for autocompletion.
 
-### Create Interpreter
+## Create Interpreter
 
 In a notebook, to enable the **HDFS** interpreter, click the **Gear** icon and select **HDFS**.
 
 
-#### WebHDFS REST API
+## WebHDFS REST API
 You can confirm that you're able to access the WebHDFS API by running a curl command against the WebHDFS end point provided to the interpreter.
 
 Here is an example:
+
+```bash
 $> curl "http://localhost:50070/webhdfs/v1/?op=LISTSTATUS"
+```
+
docs/interpreter/hive.md
Lines changed: 7 additions & 3 deletions

@@ -6,8 +6,9 @@ group: manual
 ---
 {% include JB/setup %}
 
-## Hive Interpreter for Apache Zeppelin
-The [Apache Hive](https://hive.apache.org/) ™ data warehouse software facilitates querying and managing large datasets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL. At the same time this language also allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express this logic in HiveQL.
+# Hive Interpreter for Apache Zeppelin
+
+<div id="toc"></div>
 
 ## Important Notice
 Hive Interpreter will be deprecated and merged into JDBC Interpreter. You can use Hive Interpreter by using JDBC Interpreter with same functionality. See the example below of settings and dependencies.
@@ -52,7 +53,6 @@ Hive Interpreter will be deprecated and merged into JDBC Interpreter. You can us
 </tr>
 </table>
 
-----
 
 ### Configuration
 <table class="table-configuration">
@@ -115,6 +115,10 @@ Hive Interpreter will be deprecated and merged into JDBC Interpreter. You can us
 
 This interpreter provides multiple configuration with `${prefix}`. User can set a multiple connection properties by this prefix. It can be used like `%hive(${prefix})`.
 
+## Overview
+
+The [Apache Hive](https://hive.apache.org/) ™ data warehouse software facilitates querying and managing large datasets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL. At the same time this language also allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express this logic in HiveQL.
+
 ## How to use
 Basically, you can use
 
docs/interpreter/ignite.md
Lines changed: 8 additions & 6 deletions

@@ -6,16 +6,18 @@ group: manual
 ---
 {% include JB/setup %}
 
-## Ignite Interpreter for Apache Zeppelin
+# Ignite Interpreter for Apache Zeppelin
 
-### Overview
+<div id="toc"></div>
+
+## Overview
 [Apache Ignite](https://ignite.apache.org/) In-Memory Data Fabric is a high-performance, integrated and distributed in-memory platform for computing and transacting on large-scale data sets in real-time, orders of magnitude faster than possible with traditional disk-based or flash technologies.
 
 ![Apache Ignite](../assets/themes/zeppelin/img/docs-img/ignite-logo.png)
 
 You can use Zeppelin to retrieve distributed data from cache using Ignite SQL interpreter. Moreover, Ignite interpreter allows you to execute any Scala code in cases when SQL doesn't fit to your requirements. For example, you can populate data into your caches or execute distributed computations.
 
-### Installing and Running Ignite example
+## Installing and Running Ignite example
 In order to use Ignite interpreters, you may install Apache Ignite in some simple steps:
 
 1. Download Ignite [source release](https://ignite.apache.org/download.html#sources) or [binary release](https://ignite.apache.org/download.html#binaries) whatever you want. But you must download Ignite as the same version of Zeppelin's. If it is not, you can't use scala code on Zeppelin. You can find ignite version in Zeppelin at the pom.xml which is placed under `path/to/your-Zeppelin/ignite/pom.xml` ( Of course, in Zeppelin source release ). Please check `ignite.version` .<br>Currently, Zeppelin provides ignite only in Zeppelin source release. So, if you download Zeppelin binary release( `zeppelin-0.5.0-incubating-bin-spark-xxx-hadoop-xx` ), you can not use ignite interpreter on Zeppelin. We are planning to include ignite in a future binary release.
@@ -31,7 +33,7 @@ In order to use Ignite interpreters, you may install Apache Ignite in some simpl
 $ nohup java -jar </path/to/your Jar file name>
 ```
 
-### Configuring Ignite Interpreter
+## Configuring Ignite Interpreter
 At the "Interpreters" menu, you may edit Ignite interpreter or create new one. Zeppelin provides these properties for Ignite.
 
 <table class="table-configuration">
@@ -69,14 +71,14 @@ At the "Interpreters" menu, you may edit Ignite interpreter or create new one. Z
 
 ![Configuration of Ignite Interpreter](../assets/themes/zeppelin/img/docs-img/ignite-interpreter-setting.png)
 
-### Interpreter Binding for Zeppelin Notebook
+## How to use
 After configuring Ignite interpreter, create your own notebook. Then you can bind interpreters like below image.
 
 ![Binding Interpreters](../assets/themes/zeppelin/img/docs-img/ignite-interpreter-binding.png)
 
 For more interpreter binding information see [here](http://zeppelin.apache.org/docs/manual/interpreters.html).
 
-### How to use Ignite SQL interpreter
+### Ignite SQL interpreter
 In order to execute SQL query, use ` %ignite.ignitesql ` prefix. <br>
 Supposing you are running `org.apache.ignite.examples.streaming.wordcount.StreamWords`, then you can use "words" cache( Of course you have to specify this cache name to the Ignite interpreter setting section `ignite.jdbc.url` of Zeppelin ).
 For example, you can select top 10 words in the words cache using the following query
