Commit 8f5e847

Fix markdown syntax issues that maruku flags, even though we use kramdown (but only those that do not affect kramdown's output)
1 parent 99966a9 commit 8f5e847

8 files changed: +15 -14 lines changed


docs/README.md

Lines changed: 5 additions & 4 deletions
@@ -14,9 +14,10 @@ The markdown code can be compiled to HTML using the
  [Jekyll tool](http://jekyllrb.com).
  To use the `jekyll` command, you will need to have Jekyll installed.
  The easiest way to do this is via a Ruby Gem, see the
- [jekyll installation instructions](http://jekyllrb.com/docs/installation).
- Compiling the site with Jekyll will create a directory called
- _site containing index.html as well as the rest of the compiled files.
+ [jekyll installation instructions](http://jekyllrb.com/docs/installation).
+ If not already installed, you need to install `kramdown` with `sudo gem install kramdown`.
+ Execute `jekyll` from the `docs/` directory. Compiling the site with Jekyll will create a directory called
+ `_site` containing index.html as well as the rest of the compiled files.

  You can modify the default Jekyll build as follows:

@@ -44,6 +45,6 @@ You can build just the Spark scaladoc by running `sbt/sbt doc` from the SPARK_PR

  Similarly, you can build just the PySpark epydoc by running `epydoc --config epydoc.conf` from the SPARK_PROJECT_ROOT/pyspark directory. Documentation is only generated for classes that are listed as public in `__init__.py`.

- When you run `jekyll` in the docs directory, it will also copy over the scaladoc for the various Spark subprojects into the docs directory (and then also into the _site directory). We use a jekyll plugin to run `sbt/sbt doc` before building the site so if you haven't run it (recently) it may take some time as it generates all of the scaladoc. The jekyll plugin also generates the PySpark docs using [epydoc](http://epydoc.sourceforge.net/).
+ When you run `jekyll` in the `docs` directory, it will also copy over the scaladoc for the various Spark subprojects into the `docs` directory (and then also into the `_site` directory). We use a jekyll plugin to run `sbt/sbt doc` before building the site so if you haven't run it (recently) it may take some time as it generates all of the scaladoc. The jekyll plugin also generates the PySpark docs using [epydoc](http://epydoc.sourceforge.net/).

  NOTE: To skip the step of building and copying over the Scala and Python API docs, run `SKIP_API=1 jekyll`.
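
Read together, the build steps touched by this hunk boil down to something like the following; a minimal sketch assuming Ruby and RubyGems are already set up (only the kramdown install is spelled out above, the `gem install jekyll` line is an assumption):

```sh
# Sketch of the documented workflow; commands echo the hunk above.
sudo gem install jekyll kramdown   # kramdown install is quoted above; installing jekyll via gem is assumed
cd docs
jekyll                             # builds _site/; a plugin also runs `sbt/sbt doc` and epydoc for the API docs
SKIP_API=1 jekyll                  # same build, but skip generating and copying the Scala/Python API docs
```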

docs/cluster-overview.md

Lines changed: 1 addition & 1 deletion
@@ -181,7 +181,7 @@ The following table summarizes terms you'll see used to refer to cluster concept
  <td>Distinguishes where the driver process runs. In "cluster" mode, the framework launches
  the driver inside of the cluster. In "client" mode, the submitter launches the driver
  outside of the cluster.</td>
- <tr>
+ </tr>
  <tr>
  <td>Worker node</td>
  <td>Any node that can run application code in the cluster</td>

docs/configuration.md

Lines changed: 1 addition & 1 deletion
@@ -318,7 +318,7 @@ Apart from these, the following properties are also available, and may be useful
  When serializing using org.apache.spark.serializer.JavaSerializer, the serializer caches
  objects to prevent writing redundant data, however that stops garbage collection of those
  objects. By calling 'reset' you flush that info from the serializer, and allow old
- objects to be collected. To turn off this periodic reset set it to a value of <= 0.
+ objects to be collected. To turn off this periodic reset set it to a value &lt;= 0.
  By default it will reset the serializer every 10,000 objects.
  </td>
  </tr>
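
The hunk only shows the description cell, so the property name itself is not visible here; as a hedged illustration, assuming it is `spark.serializer.objectStreamReset`, disabling the periodic reset from application code might look like this:

```scala
import org.apache.spark.SparkConf

// Hypothetical sketch: the property name is assumed, not shown in the hunk above.
// Per the description, a value <= 0 turns the periodic reset off; the default
// resets the JavaSerializer cache every 10,000 serialized objects.
val conf = new SparkConf()
  .setAppName("SerializerResetExample")
  .set("spark.serializer.objectStreamReset", "-1") // assumed property name
```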

docs/mllib-decision-tree.md

Lines changed: 1 addition & 1 deletion
@@ -95,7 +95,7 @@ The recursive tree construction is stopped at a node when one of the two conditi

  ### Practical limitations

- 1. The tree implementation stores an Array[Double] of size *O(#features \* #splits \* 2^maxDepth)*
+ 1. The tree implementation stores an `Array[Double]` of size *O(#features \* #splits \* 2^maxDepth)*
  in memory for aggregating histograms over partitions. The current implementation might not scale
  to very deep trees since the memory requirement grows exponentially with tree depth.
  2. The implemented algorithm reads both sparse and dense data. However, it is not optimized for
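
To get a feel for the bound in item 1, here is a back-of-the-envelope calculation with made-up inputs (500 features, 32 candidate splits per feature, 8-byte doubles, constants hidden by the *O(...)* ignored):

$$ 500 \times 32 \times 2^{10} \approx 1.6 \times 10^{7} \ \text{doubles} \approx 131\ \text{MB at maxDepth} = 10, $$
$$ 500 \times 32 \times 2^{15} \approx 5.2 \times 10^{8} \ \text{doubles} \approx 4.2\ \text{GB at maxDepth} = 15, $$

which illustrates why the implementation might not scale to very deep trees.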

docs/mllib-linear-methods.md

Lines changed: 1 addition & 1 deletion
@@ -63,7 +63,7 @@ methods MLlib supports:
  <tbody>
  <tr>
  <td>hinge loss</td><td>$\max \{0, 1-y \wv^T \x \}, \quad y \in \{-1, +1\}$</td>
- <td>$\begin{cases}-y \cdot \x & \text{if $y \wv^T \x <1$}, \\ 0 &
+ <td>$\begin{cases}-y \cdot \x &amp; \text{if $y \wv^T \x &lt;1$}, \\ 0 &amp;
  \text{otherwise}.\end{cases}$</td>
  </tr>
  <tr>
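
For readability, the HTML-escaped cell above encodes the hinge loss and its (sub)gradient; unescaped, and with the `\wv`, `\x` macros written as plain $w$, $x$, it reads:

$$ L(w; x, y) = \max\{0,\ 1 - y\, w^T x\}, \quad y \in \{-1, +1\}, $$
$$ \frac{\partial L}{\partial w} = \begin{cases} -y \cdot x & \text{if } y\, w^T x < 1, \\ 0 & \text{otherwise.} \end{cases} $$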

docs/mllib-naive-bayes.md

Lines changed: 1 addition & 1 deletion
@@ -109,7 +109,7 @@ smoothing parameter `lambda` as input, and output a
  [NaiveBayesModel](api/pyspark/pyspark.mllib.classification.NaiveBayesModel-class.html), which can be
  used for evaluation and prediction.

- <!--- TODO: Make Python's example consistent with Scala's and Java's. --->
+ <!-- TODO: Make Python's example consistent with Scala's and Java's. -->
  {% highlight python %}
  from pyspark.mllib.regression import LabeledPoint
  from pyspark.mllib.classification import NaiveBayes

docs/scala-programming-guide.md

Lines changed: 4 additions & 5 deletions
@@ -48,12 +48,12 @@ how to access a cluster. To create a `SparkContext` you first need to build a `S
  that contains information about your application.

  {% highlight scala %}
- val conf = new SparkConf().setAppName(<app name>).setMaster(<master>)
+ val conf = new SparkConf().setAppName(appName).setMaster(master)
  new SparkContext(conf)
  {% endhighlight %}

- The `<master>` parameter is a string specifying a [Spark, Mesos or YARN cluster URL](#master-urls)
- to connect to, or a special "local" string to run in local mode, as described below. `<app name>` is
+ The `master` parameter is a string specifying a [Spark, Mesos or YARN cluster URL](#master-urls)
+ to connect to, or a special "local" string to run in local mode, as described below. `appName` is
  a name for your application, which will be shown in the cluster web UI. It's also possible to set
  these variables [using a configuration file](cluster-overview.html#loading-configurations-from-a-file)
  which avoids hard-coding the master name in your application.
@@ -81,9 +81,8 @@ The master URL passed to Spark can be in one of the following formats:
  <table class="table">
  <tr><th>Master URL</th><th>Meaning</th></tr>
  <tr><td> local </td><td> Run Spark locally with one worker thread (i.e. no parallelism at all). </td></tr>
- <tr><td> local[K] </td><td> Run Spark locally with K worker threads (ideally, set this to the number of cores on your machine).
+ <tr><td> local[K] </td><td> Run Spark locally with K worker threads (ideally, set this to the number of cores on your machine). </td></tr>
  <tr><td> local[*] </td><td> Run Spark locally with as many worker threads as logical cores on your machine.</td></tr>
- </td></tr>
  <tr><td> spark://HOST:PORT </td><td> Connect to the given <a href="spark-standalone.html">Spark standalone
  cluster</a> master. The port must be whichever one your master is configured to use, which is 7077 by default.
  </td></tr>
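
Putting the two hunks together, the renamed `appName` and `master` placeholders are ordinary strings; a minimal sketch using the `local[K]` form from the table above (the specific values are arbitrary examples):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// appName appears in the cluster web UI; master is one of the URL forms tabled above.
val conf = new SparkConf()
  .setAppName("SimpleApp")   // arbitrary example name
  .setMaster("local[2]")     // run locally with 2 worker threads
val sc = new SparkContext(conf)
```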

docs/sql-programming-guide.md

Lines changed: 1 addition & 0 deletions
@@ -416,3 +416,4 @@ results = hiveCtx.hql("FROM src SELECT key, value").collect()
  {% endhighlight %}

  </div>
+ </div>
