Fast Data Processing with Spark 2
Description:
Key Features
- A quick way to get started with Spark – and reap the rewards
- From analytics to engineering your big data architecture, we’ve got it covered
- Bring your Scala and Java knowledge – and put it to work on new and exciting problems
When people want to process big data at speed, Spark is invariably the solution. Given its ease of development compared to the relative complexity of Hadoop, it’s no surprise that Spark is becoming popular with data analysts and engineers everywhere.
Beginning with the fundamentals, we’ll show you how to get set up with Spark with minimum fuss. You’ll then get to grips with some simple APIs before investigating machine learning and graph processing – throughout we’ll make sure you know exactly how to apply your knowledge.
You will also learn how to use the Spark shell and load data, and then how to build and run your own Spark applications. Discover how to manipulate your RDDs and get stuck into a range of DataFrame APIs. As if that’s not enough, you’ll also learn some useful machine learning algorithms with the help of Spark MLlib and see how to integrate Spark with R. We’ll also make sure you’re confident and prepared for graph processing, as you learn more about the GraphX API.
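To give a flavour of those chapters, here is a minimal sketch of a Spark shell session, assuming a Spark 2.x shell (where a SparkSession is already in scope as spark and the $ column syntax is imported) and a hypothetical people.json input file:

    // In spark-shell, a SparkSession is already available as `spark`
    // and spark.implicits._ is imported, so the $"column" syntax works.
    val people = spark.read.json("people.json")  // hypothetical input file

    // DataFrame API: filter rows and project columns.
    people.filter($"age" > 21)
          .select($"name", $"age")
          .show()

    // Drop down to the underlying RDD for lower-level manipulation.
    val names = people.rdd.map(row => row.getAs[String]("name"))
    println(names.count())

Everything above runs interactively, which is why the shell is the starting point for prototyping before you package a standalone application.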
What you will learn
- Install and set up Spark in your cluster
- Prototype distributed applications with Spark's interactive shell
- Perform data wrangling using the new DataFrame APIs
- Get to know the different ways to interact with Spark's distributed representation of data (RDDs)
- Query Spark with a SQL-like query syntax (see the sketch after this list)
- See how Spark works with big data
- Implement machine learning systems with highly scalable algorithms
- Use R, the popular statistical language, to work with Spark
- Apply interesting graph algorithms and graph processing with GraphX
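As a taste of the SQL-style querying listed above, here is a minimal sketch, again assuming a Spark 2.x shell with spark in scope and the same hypothetical people.json file:

    // Register the DataFrame as a temporary view so it can be queried with SQL.
    val people = spark.read.json("people.json")  // hypothetical input file
    people.createOrReplaceTempView("people")

    // spark.sql returns another DataFrame, so SQL results compose with the DataFrame API.
    val adults = spark.sql("SELECT name, age FROM people WHERE age > 21")
    adults.show()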
Krishna Sankar is a Senior Specialist—AI Data Scientist with Volvo Cars, focusing on autonomous vehicles. His earlier stints include Chief Data Scientist at http://cadenttech.tv/, Principal Architect/Data Scientist at Tata America Intl. Corp., Director of Data Science at a bioinformatics startup, and Distinguished Engineer at Cisco. He has spoken at various conferences, including ML tutorials at Strata SJC and London 2016, Spark Summit [goo.gl/ab30lD], Strata-Spark Camp, OSCON, PyCon, and PyData, and he writes about Robots Rules of Order [goo.gl/5yyRv6], Big Data Analytics—Best of the Worst [goo.gl/ImWCaz], predicting NFL, Spark [http://goo.gl/E4kqMD], Data Science [http://goo.gl/9pyJMH], Machine Learning [http://goo.gl/SXF53n], and Social Media Analysis [http://goo.gl/D9YpVQ]. He has also been a guest lecturer at the Naval Postgraduate School, and his occasional blogs can be found at https://doubleclix.wordpress.com/. His other passions are flying drones (he is working towards an FAA UAS Drone Pilot License) and Lego Robotics; you will find him at the St. Louis FLL World Competition as a Robots Design Judge.
Table of Contents
- Installing Spark and Setting Up Your Cluster
- Using the Spark Shell
- Building and Running a Spark Application
- Creating a SparkSession Object
- Loading and Saving Data in Spark
- Manipulating Your RDD
- Spark 2.0 Concepts
- Spark SQL
- Foundations of Datasets/DataFrames – The Proverbial Workhorse for Data Scientists
- Spark with Big Data
- Machine Learning with Spark ML Pipelines
- GraphX