Category: Spark

Spark Development


Real World Spark 2 – Jupyter Scala Spark Core

By Toyin Akin

Real World Spark 2 - Jupyter Scala Spark Core

Build a Vagrant Jupyter Scala Environment and Code/Monitor against Spark 2 Core. The modern cluster computation engine.

Course Access


You can access all the Big Data / Spark courses for one low monthly fee. The membership site currently houses courses that cover deploying Hadoop with Cloudera and Hortonworks, as well as installing and working with Spark 2.0.

This course can also be purchased individually.


Note: This course builds on the “Real World Vagrant – Build an Apache Spark Development Env! – Toyin Akin” course. If you do not already have a Spark environment installed (within a VM or directly on your machine), you can take the course named above first.

Jupyter Notebook is a system similar to Mathematica that allows you to create “executable documents”. Notebooks integrate formatted text (Markdown), executable code (Scala), mathematical formulas (LaTeX), and graphics and visualizations into a single document that captures the flow of an exploration and can be exported as a formatted report or an executable script.

The Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more.
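For example, a single notebook cell can drive Spark Core directly. The sketch below assumes a Jupyter Scala kernel (such as Apache Toree) that exposes a preconfigured SparkContext as sc; the values are invented for illustration.

// Illustrative notebook cell; assumes the kernel provides `sc`.
val numbers = sc.parallelize(1 to 1000)   // RDD from a local collection

// Transformations are lazy; nothing executes yet.
val evens = numbers.filter(_ % 2 == 0)

// Actions trigger execution; the job shows up in the Spark web UI.
val total = evens.reduce(_ + _)
println(s"Sum of even numbers up to 1000: $total")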

Big data integration

Leverage big data tools, such as Apache Spark, from Scala

The Jupyter Notebook is based on a set of open standards for interactive computing. Think HTML and CSS for interactive computing on the web. These open standards can be leveraged by third party developers to build customized applications with embedded interactive computing.

Spark Monitoring and Instrumentation

While creating RDDs, performing transformations and executing actions, you will be working heavily within the monitoring view of the Web UI.

Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:

A list of scheduler stages and tasks
A summary of RDD sizes and memory usage
Environmental information
Information about the running executors
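As a minimal sketch of where that UI comes from (assuming a local-mode Spark 2 install; the object name, application name and sleep are just for illustration):

import org.apache.spark.{SparkConf, SparkContext}

object UiDemo {   // hypothetical object name
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("ui-demo")   // appears in the web UI header
      .setMaster("local[*]")   // assumption: local mode for illustration
    val sc = new SparkContext(conf)

    // Run a small job so the Jobs and Stages tabs have content.
    sc.parallelize(1 to 100).map(_ * 2).count()

    // The UI lives at http://<driver-host>:4040 while the app runs;
    // keep the driver alive briefly so you can browse it.
    Thread.sleep(60000)
    sc.stop()
  }
}

If port 4040 is already taken, Spark binds to successive ports (4041, 4042, and so on).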

Why Apache Spark …

Apache Spark runs programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. It has an advanced DAG execution engine that supports cyclic data flow and in-memory computing, and it offers over 80 high-level operators that make it easy to build parallel apps. You can use Spark interactively from the Scala, Python and R shells, and it can combine SQL, streaming, and complex analytics.
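A quick taste of those operators, as you might type them at a Spark 2 shell (the data is invented for the example, and `sc` is the shell's preconfigured SparkContext):

// Classic word count with three high-level operators.
val lines = sc.parallelize(Seq("to be", "or not", "to be"))

val counts = lines
  .flatMap(_.split(" "))     // split each line into words
  .map(word => (word, 1))    // pair each word with a count of 1
  .reduceByKey(_ + _)        // sum counts per distinct word

counts.collect().foreach(println)   // e.g. (to,2), (be,2), (or,1), (not,1)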

Apache Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.


Recommended Spark course path: if you already have Spark installed, you do not need to take the first three courses.


Real World Spark 2 – Jupyter Python Spark Core

By Toyin Akin

Real World Spark 2 - Jupyter Python Spark Core

Build a Vagrant Python Jupyter Environment and Code/Monitor against Spark 2 Core. The modern cluster computation engine.

Course Access


You can access all the Big Data / Spark courses for one low monthly fee. The membership site currently houses courses that cover deploying Hadoop with Cloudera and Hortonworks, as well as installing and working with Spark 2.0.

This course can also be purchased individually.


Note: This course builds on the “Real World Vagrant – Build an Apache Spark Development Env! – Toyin Akin” course. If you do not already have a Spark environment installed (within a VM or directly on your machine), you can take the course named above first.

Jupyter Notebook is a system similar to Mathematica that allows you to create “executable documents”. Notebooks integrate formatted text (Markdown), executable code (Python), mathematical formulas (LaTeX), and graphics and visualizations (matplotlib) into a single document that captures the flow of an exploration and can be exported as a formatted report or an executable script.

The Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more.

Big data integration

Leverage big data tools, such as Apache Spark, from Python

The Jupyter Notebook is based on a set of open standards for interactive computing. Think HTML and CSS for interactive computing on the web. These open standards can be leveraged by third party developers to build customized applications with embedded interactive computing.

Spark Monitoring and Instrumentation

While creating RDDs, performing transformations and executing actions, you will be working heavily within the monitoring view of the Web UI.

Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:

A list of scheduler stages and tasks
A summary of RDD sizes and memory usage
Environmental information
Information about the running executors

Why Apache Spark …

Apache Spark runs programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. It has an advanced DAG execution engine that supports cyclic data flow and in-memory computing, and it offers over 80 high-level operators that make it easy to build parallel apps. You can use Spark interactively from the Scala, Python and R shells, and it can combine SQL, streaming, and complex analytics.

Apache Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.


Recommended Spark course path: if you already have Spark installed, you do not need to take the first three courses.


Real World Spark 2 – Interactive Scala spark-shell Core

By Toyin Akin

Real World Spark 2 - Interactive Scala spark-shell Core

Build a Vagrant Scala spark-shell cluster and Code/Monitor against Spark 2 Core. The modern cluster computation engine.

Course Access


You can access all the Big Data / Spark courses for one low monthly fee. The membership site currently houses courses that cover deploying Hadoop with Cloudera and Hortonworks, as well as installing and working with Spark 2.0.

This course can also be purchased individually.


Note: This course builds on the “Real World Vagrant – Build an Apache Spark Development Env! – Toyin Akin” course. If you do not already have a Spark environment installed (within a VM or directly on your machine), you can take the course named above first.

Spark’s shell provides a simple way to learn the API, as well as a powerful tool to analyze data interactively. It is available in Scala (which runs on the Java VM and is thus a good way to use existing Java libraries). Start it by running the following from any bash terminal inside the built virtual machine:

spark-shell

Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from collections, from Hadoop InputFormats (such as HDFS files), or by transforming other RDDs.
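All three creation routes look like this at the spark-shell prompt (a minimal sketch; the HDFS path is a placeholder, and `sc` is provided by the shell):

// 1. From a local collection.
val fromCollection = sc.parallelize(Seq(1, 2, 3, 4, 5))

// 2. From a Hadoop InputFormat, e.g. a text file on HDFS.
//    Assumption: this path exists in your environment.
val fromHdfs = sc.textFile("hdfs:///data/input.txt")

// 3. By transforming an existing RDD.
val transformed = fromCollection.map(_ * 10).filter(_ > 20)

transformed.collect().foreach(println)   // action: prints 30, 40, 50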

Spark Monitoring and Instrumentation

While creating RDDs, performing transformations and executing actions, you will be working heavily within the monitoring view of the Web UI.

Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:

A list of scheduler stages and tasks
A summary of RDD sizes and memory usage
Environmental information
Information about the running executors

Why Apache Spark …

Apache Spark runs programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. It has an advanced DAG execution engine that supports cyclic data flow and in-memory computing, and it offers over 80 high-level operators that make it easy to build parallel apps. You can use Spark interactively from the Scala, Python and R shells, and it can combine SQL, streaming, and complex analytics.

Apache Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.


Recommended Spark course path: if you already have Spark installed, you do not need to take the first three courses.


Real World Spark 2 – Interactive Python pyspark Core

By Toyin Akin

Real World Spark 2 - Interactive Python pyspark Core

Build a Vagrant Python pyspark cluster and Code/Monitor against Spark 2 Core. The modern cluster computation engine.

Course Access


You can access all the Big Data / Spark courses for one low monthly fee. The membership site currently houses courses that cover deploying Hadoop with Cloudera and Hortonworks, as well as installing and working with Spark 2.0.

This course can also be purchased individually.


Note: This course builds on the “Real World Vagrant – Build an Apache Spark Development Env! – Toyin Akin” course. If you do not already have a Spark environment installed (within a VM or directly on your machine), you can take the course named above first.

Spark’s Python shell provides a simple way to learn the API, as well as a powerful tool to analyze data interactively. Start it by running the following from any bash terminal inside the built virtual machine:

pyspark

Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from collections, from Hadoop InputFormats (such as HDFS files), or by transforming other RDDs.

Spark Monitoring and Instrumentation

While creating RDDs, performing transformations and executing actions, you will be working heavily within the monitoring view of the Web UI.

Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:

A list of scheduler stages and tasks
A summary of RDD sizes and memory usage
Environmental information
Information about the running executors

Why Apache Spark …

Apache Spark runs programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. It has an advanced DAG execution engine that supports cyclic data flow and in-memory computing, and it offers over 80 high-level operators that make it easy to build parallel apps. You can use Spark interactively from the Scala, Python and R shells, and it can combine SQL, streaming, and complex analytics.

Apache Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.


Recommended Spark course path: if you already have Spark installed, you do not need to take the first three courses.


Real World Spark 2 – ScalaIDE Spark Core 2 Developer

By Toyin Akin

Real World Spark 2 - ScalaIDE Spark Core 2 Developer

Build a Vagrant box, walk through Spark 2 Core Code via sbt and ScalaIDE. The modern cluster computation engine.

Course Access


You can access all the Big Data / Spark courses for one low monthly fee. The membership site currently houses courses that cover deploying Hadoop with Cloudera and Hortonworks, as well as installing and working with Spark 2.0.

This course can also be purchased individually.


Note: This course builds on the “Real World Vagrant – Build an Apache Spark Development Env! – Toyin Akin” course. If you do not already have a Spark + ScalaIDE environment installed (within a VM or directly on your machine), you can take the course named above first.

Scala IDE provides advanced editing and debugging support for the development of pure Scala and mixed Scala-Java applications.

Now with a shiny Scala debugger, semantic highlighting, a more reliable JUnit test finder, an ecosystem of related plugins, and much more.

Scala Debugger. Stepping through closures and Scala-aware display of debugging information.
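Since the course drives Spark 2 Core through sbt, a minimal build definition is the natural starting point. Here is a sketch of a build.sbt (the project name is invented and the versions are assumptions; match them to the Spark build installed in your environment):

// build.sbt -- minimal Spark 2 Core project for sbt/ScalaIDE.
name := "spark-core-playground"   // hypothetical project name

version := "0.1.0"

scalaVersion := "2.11.8"          // Spark 2.0 is built against Scala 2.11

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"

If you later submit the assembled jar with spark-submit, the Spark dependency is typically marked "provided" so it is not bundled into the jar a second time.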

Spark Monitoring and Instrumentation

While creating RDDs, performing transformations and executing actions, you will be working heavily within the monitoring view of the Web UI.

Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:

A list of scheduler stages and tasks
A summary of RDD sizes and memory usage
Environmental information
Information about the running executors

Why Apache Spark …

Apache Spark runs programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. It has an advanced DAG execution engine that supports cyclic data flow and in-memory computing, and it offers over 80 high-level operators that make it easy to build parallel apps. You can use Spark interactively from the Scala, Python and R shells, and it can combine SQL, streaming, and complex analytics.

Apache Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.


Recommended Spark course path: if you already have Spark installed, you do not need to take the first three courses.
