docs/interpreter/alluxio.md: 5 additions & 1 deletion
@@ -6,7 +6,11 @@ group: manual
 ---
 {% include JB/setup %}

-## Alluxio Interpreter for Apache Zeppelin
+# Alluxio Interpreter for Apache Zeppelin
+
+<div id="toc"></div>
+
+## Overview
 [Alluxio](http://alluxio.org/) is a memory-centric distributed storage system enabling reliable data sharing at memory-speed across cluster frameworks.
docs/interpreter/elasticsearch.md: 5 additions & 1 deletion
@@ -6,7 +6,11 @@ group: manual
 ---
 {% include JB/setup %}

-## Elasticsearch Interpreter for Apache Zeppelin
+# Elasticsearch Interpreter for Apache Zeppelin
+
+<div id="toc"></div>
+
+## Overview
 [Elasticsearch](https://www.elastic.co/products/elasticsearch) is a highly scalable open-source full-text search and analytics engine. It allows you to store, search, and analyze big volumes of data quickly and in near real time. It is generally used as the underlying engine/technology that powers applications that have complex search features and requirements.
docs/interpreter/flink.md: 5 additions & 1 deletion
@@ -6,7 +6,11 @@ group: manual
 ---
 {% include JB/setup %}

-## Flink interpreter for Apache Zeppelin
+# Flink interpreter for Apache Zeppelin
+
+<div id="toc"></div>
+
+## Overview
 [Apache Flink](https://flink.apache.org) is an open source platform for distributed stream and batch data processing. Flink’s core is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams. Flink also builds batch processing on top of the streaming engine, overlaying native iteration support, managed memory, and program optimization.

 ## How to start local Flink cluster, to test the interpreter
docs/interpreter/geode.md: 17 additions & 13 deletions
@@ -6,7 +6,11 @@ group: manual
 ---
 {% include JB/setup %}

-## Geode/Gemfire OQL Interpreter for Apache Zeppelin
+# Geode/Gemfire OQL Interpreter for Apache Zeppelin
+
+<div id="toc"></div>
+
+## Overview
 <table class="table-configuration">
 <tr>
 <th>Name</th>
@@ -33,7 +37,7 @@ This interpreter supports the [Geode](http://geode.incubator.apache.org/) [Objec

 This [Video Tutorial](https://www.youtube.com/watch?v=zvzzA9GXu3Q) illustrates some of the features provided by the `Geode Interpreter`.

-### Create Interpreter
+## Create Interpreter
 By default Zeppelin creates one `Geode/OQL` instance. You can remove it or create more instances.

 Multiple Geode instances can be created, each configured to the same or different backend Geode cluster. But over time a `Notebook` can have only one Geode interpreter instance `bound`. That means you _cannot_ connect to different Geode clusters in the same `Notebook`. This is a known Zeppelin limitation.
@@ -42,10 +46,10 @@ To create new Geode instance open the `Interpreter` section and click the `+Crea

 > Note: The `Name` of the instance is used only to distinguish the instances while binding them to the `Notebook`. The `Name` is irrelevant inside the `Notebook`. In the `Notebook` you must use the `%geode.oql` tag.

-### Bind to Notebook
+## Bind to Notebook
 In the `Notebook` click on the `settings` icon in the top right corner. Then select/deselect the interpreters to be bound with the `Notebook`.

-### Configuration
+## Configuration
 You can modify the configuration of Geode from the `Interpreter` section. The Geode interpreter exposes the following properties:

 <table class="table-configuration">
@@ -71,12 +75,12 @@ You can modify the configuration of the Geode from the `Interpreter` section. T
 </tr>
 </table>

-### How to use
+## How to use
 > *Tip 1: Use (CTRL + .) for OQL auto-completion.*

 > *Tip 2: Always start the paragraphs with the full `%geode.oql` prefix tag! The short notation `%geode` would still be able to run the OQL queries, but the syntax highlighting and the auto-completion will be disabled.*

-#### Create / Destroy Regions
+### Create / Destroy Regions
 The OQL specification does not support [Geode Regions](https://cwiki.apache.org/confluence/display/GEODE/Index#Index-MainConceptsandComponents) mutation operations. To `create`/`destroy` regions one should use the [GFSH](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/chapter_overview.html) shell tool instead. In the following it is assumed that GFSH is colocated with the Zeppelin server.

 ```bash
@@ -97,7 +101,7 @@ EOF

 The above snippet re-creates two regions: `regionEmployee` and `regionCompany`. Note that you have to explicitly specify the locator host and port. The values should match those you have used in the Geode Interpreter configuration. A comprehensive list of [GFSH Commands by Functional Area](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/gfsh_quick_reference.html) is available.

-#### Basic OQL
+### Basic OQL
 ```sql
 %geode.oql
 SELECT count(*) FROM /regionEmployee
@@ -136,7 +140,7 @@ SELECT e.key, e.value FROM /regionEmployee.entrySet e

 > Note: You can have multiple queries in the same paragraph but only the result from the first is displayed. [[1](https://issues.apache.org/jira/browse/ZEPPELIN-178)], [[2](https://issues.apache.org/jira/browse/ZEPPELIN-212)].

-#### GFSH Commands From The Shell
+### GFSH Commands From The Shell
 Use the Shell Interpreter (`%sh`) to run OQL commands from the command line:
 You can leverage [Zeppelin Dynamic Form](../manual/dynamicform.html) inside your OQL queries. You can use both the `text input` and `select form` parameterization features.

 ```sql
 %geode.oql
 SELECT * FROM /regionEmployee e WHERE e.employeeId > ${Id}
 ```

-#### Geode REST API
+### Auto-completion
+The Geode Interpreter provides basic auto-completion functionality. On `(Ctrl+.)` it lists the most relevant suggestions in a pop-up window.
+
+## Geode REST API
 To list the defined regions you can use the [Geode REST API](http://geode-docs.cfapps.io/docs/geode_rest/chapter_overview.html):

 ```
@@ -182,6 +189,3 @@ http://<geode server hostname>phd1.localdomain:8484/gemfire-api/v1/
 http-service-port=8484
 start-dev-rest-api=true
 ```
-
-### Auto-completion
-The Geode Interpreter provides basic auto-completion functionality. On `(Ctrl+.)` it lists the most relevant suggestions in a pop-up window.
docs/interpreter/hbase.md: 9 additions & 3 deletions
@@ -6,16 +6,22 @@ group: manual
 ---
 {% include JB/setup %}

-## HBase Shell Interpreter for Apache Zeppelin
+# HBase Shell Interpreter for Apache Zeppelin
+
+<div id="toc"></div>
+
+## Overview
 [HBase Shell](http://hbase.apache.org/book.html#shell) is a JRuby IRB client for Apache HBase. This interpreter provides all capabilities of the Apache HBase shell within Apache Zeppelin. The interpreter assumes that the Apache HBase client software has been installed and that it can connect to the Apache HBase cluster from the machine on which Apache Zeppelin is installed.
-To get started with HBase, please see [HBase Quickstart](https://hbase.apache.org/book.html#quickstart)
+To get started with HBase, please see [HBase Quickstart](https://hbase.apache.org/book.html#quickstart).

 ## HBase release supported
 By default, Zeppelin is built against HBase 1.0.x releases. To work with HBase 1.1.x releases, use the following build command:
 To work with HBase 1.2.0+, use the following build command:

 ```bash
@@ -94,4 +100,4 @@ And then to put data into that table
 put 'test', 'row1', 'cf:a', 'value1'
 ```

-For more information on all commands available, refer to [HBase shell commands](https://learnhbase.wordpress.com/2013/03/02/hbase-shell-commands/)
+For more information on all commands available, refer to [HBase shell commands](https://learnhbase.wordpress.com/2013/03/02/hbase-shell-commands/).
docs/interpreter/hdfs.md: 10 additions & 3 deletions
@@ -6,8 +6,11 @@ group: manual
 ---
 {% include JB/setup %}

-## HDFS File System Interpreter for Apache Zeppelin
+# HDFS File System Interpreter for Apache Zeppelin

+<div id="toc"></div>
+
+## Overview
 [Hadoop File System](http://hadoop.apache.org/) is a distributed, fault-tolerant file system that is part of the Hadoop project. It is often used as storage by distributed processing engines like [Hadoop MapReduce](http://hadoop.apache.org/) and [Apache Spark](http://spark.apache.org/), or by file systems like [Alluxio](http://www.alluxio.org/).

 ## Configuration
@@ -44,13 +47,17 @@ It supports the basic shell file commands applied to HDFS, it currently only sup

 > **Tip:** Use ( Ctrl + . ) for autocompletion.

-### Create Interpreter
+## Create Interpreter

 In a notebook, to enable the **HDFS** interpreter, click the **Gear** icon and select **HDFS**.


-#### WebHDFS REST API
+## WebHDFS REST API
 You can confirm that you're able to access the WebHDFS API by running a curl command against the WebHDFS end point provided to the interpreter.
docs/interpreter/hive.md: 7 additions & 3 deletions
@@ -6,8 +6,9 @@ group: manual
 ---
 {% include JB/setup %}

-## Hive Interpreter for Apache Zeppelin
-The [Apache Hive](https://hive.apache.org/) ™ data warehouse software facilitates querying and managing large datasets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL. At the same time this language also allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express this logic in HiveQL.
+# Hive Interpreter for Apache Zeppelin
+
+<div id="toc"></div>

 ## Important Notice
 Hive Interpreter will be deprecated and merged into JDBC Interpreter. You can use Hive Interpreter by using JDBC Interpreter with the same functionality. See the example below of settings and dependencies.
@@ -52,7 +53,6 @@ Hive Interpreter will be deprecated and merged into JDBC Interpreter. You can us
 </tr>
 </table>

-----

 ### Configuration
 <table class="table-configuration">
@@ -115,6 +115,10 @@ Hive Interpreter will be deprecated and merged into JDBC Interpreter. You can us

 This interpreter provides multiple configurations with `${prefix}`. Users can set multiple connection properties with this prefix. It can then be used like `%hive(${prefix})`.

+## Overview
+
+The [Apache Hive](https://hive.apache.org/) ™ data warehouse software facilitates querying and managing large datasets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL. At the same time this language also allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express this logic in HiveQL.
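The Hive interpreter's `${prefix}` mechanism can be sketched as follows. All property names and values here are hypothetical illustrations, not part of this changeset: each prefixed group of properties defines a separate connection, selected per paragraph.

```
# Hypothetical interpreter properties -- "etl" is the ${prefix}
etl.driver   org.apache.hive.jdbc.HiveDriver
etl.url      jdbc:hive2://etl-host:10000
etl.user     zeppelin

# In a notebook paragraph, select that connection with the prefixed tag:
%hive(etl)
SELECT count(*) FROM my_table
```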
docs/interpreter/ignite.md: 8 additions & 6 deletions
@@ -6,16 +6,18 @@ group: manual
 ---
 {% include JB/setup %}

-## Ignite Interpreter for Apache Zeppelin
+# Ignite Interpreter for Apache Zeppelin

-### Overview
+<div id="toc"></div>
+
+## Overview
 [Apache Ignite](https://ignite.apache.org/) In-Memory Data Fabric is a high-performance, integrated and distributed in-memory platform for computing and transacting on large-scale data sets in real-time, orders of magnitude faster than possible with traditional disk-based or flash technologies.
 You can use Zeppelin to retrieve distributed data from cache using the Ignite SQL interpreter. Moreover, the Ignite interpreter allows you to execute any Scala code in cases when SQL doesn't fit your requirements. For example, you can populate data into your caches or execute distributed computations.

-### Installing and Running Ignite example
+## Installing and Running Ignite example
 In order to use the Ignite interpreters, you may install Apache Ignite in a few simple steps:

 1. Download the Ignite [source release](https://ignite.apache.org/download.html#sources) or [binary release](https://ignite.apache.org/download.html#binaries), whichever you want. But you must download the same Ignite version as Zeppelin's; if not, you can't use Scala code in Zeppelin. You can find the Ignite version in the pom.xml placed under `path/to/your-Zeppelin/ignite/pom.xml` (of course, in the Zeppelin source release). Please check `ignite.version`.<br>Currently, Zeppelin provides Ignite only in the Zeppelin source release, so if you download a Zeppelin binary release ( `zeppelin-0.5.0-incubating-bin-spark-xxx-hadoop-xx` ), you cannot use the Ignite interpreter in Zeppelin. We are planning to include Ignite in a future binary release.
@@ -31,7 +33,7 @@ In order to use Ignite interpreters, you may install Apache Ignite in some simpl
 $ nohup java -jar </path/to/your Jar file name>
 ```

-### Configuring Ignite Interpreter
+## Configuring Ignite Interpreter
 At the "Interpreters" menu, you may edit the Ignite interpreter or create a new one. Zeppelin provides these properties for Ignite.

 <table class="table-configuration">
@@ -69,14 +71,14 @@ At the "Interpreters" menu, you may edit Ignite interpreter or create new one. Z

 

-### Interpreter Binding for Zeppelin Notebook
+## How to use
 After configuring the Ignite interpreter, create your own notebook. Then you can bind interpreters like the below image.
 For more interpreter binding information see [here](http://zeppelin.apache.org/docs/manual/interpreters.html).

-### How to use Ignite SQL interpreter
+### Ignite SQL interpreter
 In order to execute a SQL query, use the ` %ignite.ignitesql ` prefix. <br>
 Supposing you are running `org.apache.ignite.examples.streaming.wordcount.StreamWords`, you can use the "words" cache (of course, you have to specify this cache name in the Ignite interpreter setting section `ignite.jdbc.url` of Zeppelin).
 For example, you can select the top 10 words in the words cache using the following query