1) What are the various levels of persistence in Apache Spark?
Apache Spark automatically persists the intermediary data from various shuffle operations; however, it is often suggested that users call the persist() method on an RDD they plan to reuse. Spark has various persistence levels to store RDDs on disk, in memory, or as a combination of both, with different replication levels.
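A minimal sketch of choosing a persistence level, assuming an existing SparkContext named sc and a hypothetical input path:

import org.apache.spark.storage.StorageLevel

val rdd = sc.textFile("hdfs:///logs")        // hypothetical input path
rdd.persist(StorageLevel.MEMORY_AND_DISK)    // keep partitions in memory, spill to disk when memory is full
// Other levels include MEMORY_ONLY (the default used by cache()), MEMORY_ONLY_SER,
// DISK_ONLY and replicated variants such as MEMORY_AND_DISK_2.
rdd.count()                                  // the first action materialises the persisted data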
2) What is Shark?
Most data users know only SQL and are not good at programming. Shark is a tool, developed for people from a database background, to access Spark's MLlib capabilities through a Hive-like SQL interface. The Shark tool helps data users run Hive on Spark – offering compatibility with the Hive metastore, queries, and data.
3) List some use cases where Spark outperforms Hadoop in processing.
4) What is a Sparse Vector?
A sparse vector has two parallel arrays – one for indices and the other for values. These vectors are used for storing non-zero entries to save space.
5) What is RDD?
RDDs (Resilient Distributed Datasets) are the basic abstraction in Apache Spark that represent the data coming into the system in object format. RDDs are used for in-memory computations on large clusters, in a fault-tolerant manner. RDDs are read-only, partitioned collections of records that are immutable, distributed across the nodes of the cluster, and recoverable through lineage information if a partition is lost.
6) Explain about transformations and actions in the context of RDDs.
Transformations are functions executed on demand to produce a new RDD. They are lazily evaluated and are not computed until an action is called. Some examples of transformations include map, filter and reduceByKey.
Actions trigger the execution of the accumulated transformations and return a result to the driver program or write it to external storage. Some examples of actions include reduce, collect, first, and take.
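A short word-count sketch illustrates the difference, assuming an existing SparkContext named sc:

val lines  = sc.parallelize(Seq("spark is fast", "spark is simple"))
val words  = lines.flatMap(_.split(" "))     // transformation: lazily describes a new RDD
val pairs  = words.map(w => (w, 1))          // transformation
val counts = pairs.reduceByKey(_ + _)        // transformation (wide – causes a shuffle)
val result = counts.collect()                // action: triggers execution and returns Array[(String, Int)] to the driver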
7) What are the languages supported by Apache Spark for developing big data applications?
Scala, Java, Python, and R. Other JVM languages such as Clojure can also be used through third-party bindings.
8) Can you use Spark to access and analyse data stored in Cassandra databases?
Yes, it is possible by using the Spark Cassandra Connector.
9) Is it possible to run Apache Spark on Apache Mesos?
Yes, Apache Spark can be run on the hardware clusters managed by Mesos.
10) Explain about the different cluster managers in Apache Spark
The three cluster managers supported in Apache Spark are:
Standalone – the basic cluster manager bundled with Spark.
Apache Mesos – a general-purpose cluster manager that can also run Hadoop applications.
YARN – the resource manager of Hadoop 2.
11) How can Spark be connected to Apache Mesos?
To connect Spark with Mesos:
Configure the Spark driver program to connect to Mesos by setting the master URL to the Mesos master.
Put the Spark binary package in a location accessible by Mesos.
Install Spark in the same location as Mesos, or set the property spark.mesos.executor.home to point to the Spark installation.
12) How can you minimize data transfers when working with Spark?
Minimizing data transfers and avoiding shuffling helps write Spark programs that run in a fast and reliable manner. The various ways in which data transfers can be minimized when working with Apache Spark are:
Using broadcast variables to efficiently distribute large read-only values to all the nodes.
Using accumulators to update the values of variables in parallel while executing.
Avoiding shuffle-heavy operations such as the ByKey transformations and repartition wherever possible.
13) Why is there a need for broadcast variables when working with Apache Spark?
These are read-only variables, kept in an in-memory cache on every machine. When working with Spark, usage of broadcast variables eliminates the necessity to ship a copy of a variable with every task, so data can be processed faster. Broadcast variables help in storing a lookup table in memory, which enhances retrieval efficiency compared to an RDD lookup().
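A small illustrative sketch, assuming an existing SparkContext named sc and a made-up lookup table:

val countryNames = Map("IN" -> "India", "US" -> "United States")   // small lookup table
val bc = sc.broadcast(countryNames)                                 // shipped once per executor, not per task
val users = sc.parallelize(Seq(("alice", "IN"), ("bob", "US")))
val resolved = users.map { case (name, code) => (name, bc.value.getOrElse(code, "Unknown")) }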
14) Is it possible to run Spark and Mesos along with Hadoop?
Yes, it is possible to run Spark and Mesos with Hadoop by launching each of these as a separate service on the machines. Mesos acts as a unified scheduler that assigns tasks to either Spark or Hadoop.
15) What is lineage graph?
The RDDs in Spark depend on one or more other RDDs. The representation of these dependencies between RDDs is known as the lineage graph. Lineage graph information is used to compute each RDD on demand, so that whenever a part of a persistent RDD is lost, the lost data can be recovered using the lineage graph information.
16) How can you trigger automatic clean-ups in Spark to handle accumulated metadata?
You can trigger the clean-ups by setting the parameter ‘spark.cleaner.ttl’ or by dividing the long running jobs into different batches and writing the intermediary results to the disk.
17) Explain about the major libraries that constitute the Spark Ecosystem
Spark MLlib – the machine learning library.
Spark Streaming – for processing real-time streaming data.
Spark GraphX – for graphs and graph-parallel computation.
Spark SQL – for executing SQL-like queries on Spark data.
18) What are the benefits of using Spark with Apache Mesos?
It provides scalable partitioning among various Spark instances and dynamic partitioning between Spark and other big data frameworks.
19) What is the significance of Sliding Window operation?
In Spark Streaming, the Sliding Window operation applies transformations over a sliding window of data rather than to each record individually. Whenever the window slides, the RDDs that fall within the particular window are combined and operated upon to produce new RDDs of the windowed DStream.
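As an illustration, a windowed word count could look like the following sketch, assuming an existing StreamingContext and an input DStream[String] named lines (both hypothetical here):

import org.apache.spark.streaming.Seconds

val pairs  = lines.flatMap(_.split(" ")).map((_, 1))
// window length of 30 seconds, sliding interval of 10 seconds:
// every 10 seconds the counts cover the last 30 seconds of data
val counts = pairs.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))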
20) What is a DStream?
A Discretized Stream is a sequence of Resilient Distributed Datasets that represent a stream of data. DStreams can be created from various sources like Apache Kafka, HDFS, and Apache Flume. DStreams support two kinds of operations – transformations, which produce a new DStream, and output operations, which write data to an external system.
21) When running Spark applications, is it necessary to install Spark on all the nodes of YARN cluster?
Spark need not be installed on all the nodes when running a job under YARN or Mesos, because Spark can execute on top of YARN or Mesos clusters without requiring any change to the cluster.
22) What is Catalyst framework?
Catalyst is the optimization framework used in Spark SQL. It allows Spark to automatically transform SQL queries by applying a series of optimizations, producing a faster processing system.
23) Name a few companies that use Apache Spark in production.
Pinterest, Conviva, Shopify, Open Table
24) Which Spark library allows reliable file sharing at memory speed across different cluster frameworks?
Tachyon
25) What is BlinkDB and why is it used?
BlinkDB is a query engine for executing interactive SQL queries on huge volumes of data, and it renders query results marked with meaningful error bars. BlinkDB helps users balance ‘query accuracy’ with response time.
26) How can you compare Hadoop and Spark in terms of ease of use?
Hadoop MapReduce requires programming in Java, which is difficult, though Pig and Hive make it considerably easier; learning Pig and Hive syntax still takes time. Spark has interactive APIs for different languages like Java, Python and Scala, and also includes Spark SQL (formerly Shark) for SQL users – making it comparatively easier to use than Hadoop.
27) What are the common mistakes developers make when running Spark applications?
Developers often make the mistake of –
hitting the web service several times by using multiple clusters.
running everything on the local node instead of distributing it.
Developers need to be careful with this, as Spark makes use of memory for processing.
28) What is the advantage of a Parquet file?
A Parquet file is a columnar-format file that helps –
limit I/O operations
consume less storage space
fetch only the required columns
29) What are the various data sources available in SparkSQL?
Parquet files, JSON datasets, Hive tables, and external databases through JDBC.
30) How does Spark use Hadoop?
Spark performs its own cluster management and computation, and mainly uses Hadoop for storage.
31) What are the key features of Apache Spark that you like?
32) What do you understand by Pair RDD?
Special operations can be performed on RDDs in Spark using key/value pairs, and such RDDs are referred to as Pair RDDs. Pair RDDs allow users to access each key in parallel. They have a reduceByKey() method that collects data based on each key and a join() method that combines different RDDs together, based on the elements having the same key.
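A brief sketch, assuming an existing SparkContext named sc and made-up data:

val sales  = sc.parallelize(Seq(("apples", 3), ("oranges", 2), ("apples", 5)))
val totals = sales.reduceByKey(_ + _)                    // (apples,8), (oranges,2)
val prices = sc.parallelize(Seq(("apples", 0.5), ("oranges", 0.75)))
val joined = totals.join(prices)                         // (apples,(8,0.5)), (oranges,(2,0.75))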
33) Which one will you choose for a project –Hadoop MapReduce or Apache Spark?
The answer to this question depends on the given project scenario – as it is known that Spark makes use of in-memory processing and minimizes network and disk I/O. However, Spark uses a large amount of RAM and requires dedicated machines to produce effective results. So the decision to use Hadoop or Spark varies dynamically with the requirements of the project and the budget of the organization.
34) Explain about the different types of transformations on DStreams?
Stateless transformations – processing of a batch does not depend on the output of previous batches. Examples include map(), filter() and reduceByKey().
Stateful transformations – processing of a batch depends on the intermediary results of previous batches. Examples include window-based transformations and updateStateByKey().
35) Explain about the popular use cases of Apache Spark
Apache Spark is mainly used for –
iterative machine learning
interactive data analytics and processing
stream processing
sensor data processing
36) Is Apache Spark a good fit for Reinforcement learning?
No. Apache Spark works well only for simple machine learning algorithms like clustering, regression, and classification.
37) What is Spark Core?
It has all the basic functionalities of Spark, like – memory management, fault recovery, interacting with storage systems, scheduling tasks, etc.
38) How can you remove the elements with a key present in any other RDD?
Use the subtractByKey() function.
39) What is the difference between persist() and cache()?
persist() allows the user to specify the storage level, whereas cache() uses the default storage level (MEMORY_ONLY).
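For example, assuming two existing RDDs named rddA and rddB:

import org.apache.spark.storage.StorageLevel

rddA.cache()                                    // same as rddA.persist(StorageLevel.MEMORY_ONLY)
rddB.persist(StorageLevel.MEMORY_AND_DISK_SER)  // explicit level: serialised in memory, spilling to disk
// Note: an RDD's storage level cannot be changed once it has been set.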
40) Explain what is Scala?
Scala is an object-oriented, functional programming and scripting language for general software applications, designed to express solutions in a concise manner.
41) What is a ‘Scala set’? What are the methods through which set operations are expressed?
A Scala set is a collection of unique elements of the same type; it does not contain any duplicate elements. There are two kinds of sets, mutable and immutable. Set operations are expressed through predefined methods such as head, tail, isEmpty, contains, union and intersect.
42) What is a ‘Scala map’?
A Scala map is a collection of key/value pairs. Any value can be retrieved based on its key. Keys are unique in a Map, but values need not be.
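For example:

val capitals = Map("France" -> "Paris", "Japan" -> "Tokyo")   // immutable by default
capitals("Japan")                      // "Tokyo" – values are looked up by key
capitals.get("Italy")                  // None – get returns an Option for safe lookup
val updated = capitals + ("Italy" -> "Rome")   // returns a new map; the original is unchanged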
43) What is the advantage of Scala?
a) Less error prone functional style
b) High maintainability and productivity
c) High scalability
d) High testability
e) Provides features of concurrent programming
44) In what ways Scala is better than other programming language?
a) Arrays use regular generics, while in other languages generics are bolted on as an afterthought and are completely separate but have overlapping behaviours with arrays.
b) Scala has immutable “val” as a first-class language feature. The “val” of Scala is similar to a Java final variable: the contents may mutate but the top-level reference is immutable.
c) Scala lets ‘if’ blocks, ‘for-yield’ loops, and code in braces return a value. This is preferable and eliminates the need for a separate ternary operator.
d) Scala has singleton objects rather than the classic statics of C++/Java/C#. This is a cleaner solution.
e) Persistent immutable collections are the default and built into the standard library.
f) It has native tuples and concise code.
g) It has no boilerplate code.
45) What are the Scala variables?
Scala has two kinds of variables: values and variables. A value (val) is constant and cannot be changed once assigned – it is immutable – while a regular variable (var) is mutable and can be reassigned.
The two types of variables are
var myVar : Int=0;
val myVal: Int=1;
46) Mention the difference between an object and a class?
A class is a definition, or blueprint, of a type: it defines the type in terms of methods and composition of other types. An object, on the other hand, is a singleton – a single, unique instance of a class. An anonymous class is created for every object in the code; it inherits from whatever classes you declared the object to implement.
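A small sketch of the distinction:

class Counter {                 // class: a blueprint – every instance has its own state
  private var n = 0
  def increment(): Int = { n += 1; n }
}

object AppConfig {              // object: a singleton – exactly one instance, created lazily
  val name = "demo"
}

val a = new Counter()
val b = new Counter()           // two distinct instances of the same class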
47) What is tail recursion in Scala?
‘Recursion’ means a function calling itself; it is a technique used frequently in functional programming. For a function to be tail recursive, the recursive call must be the last operation performed by the function, which allows the compiler to optimise it into a loop.
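For example, a tail-recursive factorial, which the compiler can verify with the @tailrec annotation:

import scala.annotation.tailrec

@tailrec
def factorial(n: Int, acc: BigInt = 1): BigInt =
  if (n <= 1) acc
  else factorial(n - 1, acc * n)   // the recursive call is the last operation, so the
                                   // compiler rewrites it as a loop and the stack does not grow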
48) What is a ‘trait’ in Scala?
‘Traits’ are used to define object types specified by the signatures of the supported methods. Scala allows traits to be partially implemented, but traits may not have constructor parameters. A trait consists of method and field definitions; by mixing them into classes they can be reused.
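A minimal sketch of a partially implemented trait mixed into a class:

trait Greeter {
  def name: String                        // abstract member, supplied by the mixing class
  def greet(): String = s"Hello, $name"   // concrete (already implemented) member
}

class Employee(val name: String) extends Greeter

new Employee("Asha").greet()              // "Hello, Asha"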
49) When can you use traits?
There is no specific rule for when to use traits, but there are guidelines you can consider.
a) If the behaviour will not be reused, make it a concrete class; it is not reusable behaviour anyway.
b) If you need to inherit from it in Java code, use an abstract class.
c) If efficiency is a priority, lean towards using a class.
d) Make it a trait if it might be reused in multiple, unrelated classes. Only traits can be mixed into different parts of the class hierarchy.
e) Use an abstract class if you want to distribute it in compiled form and expect outside groups to write classes inheriting from it.
50) What are Case Classes?
Case classes provide a recursive decomposition mechanism via pattern matching; they are regular classes that export their constructor parameters. The constructor parameters of case classes can be accessed directly and are treated as public values.
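For example:

case class Point(x: Int, y: Int)   // constructor parameters become public vals

val p = Point(1, 2)                // no 'new' needed – a companion apply() is generated
p.x                                // 1

p match {                          // pattern matching decomposes the case class
  case Point(0, _) => "on the y-axis"
  case Point(a, b) => s"at ($a, $b)"
}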
51) What is the use of tuples in scala?
Scala tuples combine a fixed number of items together so that they can be passed around as a whole. A tuple is immutable and can hold objects with different types, unlike an array or list.
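For example:

val record: (String, Int, Boolean) = ("spark", 3, true)   // fixed size, mixed element types
record._1                               // "spark" – elements are accessed by position
val (name, version, stable) = record    // or destructured in a single step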
52) What is function currying in Scala?
Currying is the technique of transforming a function that takes multiple arguments into a chain of functions that each take a single argument. Scala supports many of the same techniques as languages like Haskell and LISP; function currying is one of the least used and most misunderstood of them.
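A small sketch of a curried function and partial application:

def add(a: Int)(b: Int): Int = a + b   // curried: one parameter list per argument
val addTwo = add(2) _                  // partially applied – fixes the first argument
addTwo(5)                              // 7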
53) What are implicit parameters in Scala?
An implicit parameter is a way of allowing the parameters of a method to be “found”. It is similar to a default parameter, but it uses a different mechanism for finding the “default” value. An implicit parameter is a parameter to a method or constructor that is marked as implicit: if no value is supplied for it, the compiler searches for an “implicit” value defined within scope.
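A brief sketch using an illustrative, made-up Discount type:

case class Discount(rate: Double)

def finalPrice(amount: Double)(implicit d: Discount): Double = amount * (1 - d.rate)

implicit val seasonal: Discount = Discount(0.10)   // in scope, so the compiler can find it
finalPrice(200.0)                                  // 180.0 – the implicit value is supplied automatically
finalPrice(200.0)(Discount(0.25))                  // 150.0 – an explicit value can still be passed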
54) What is closure in Scala?
A closure is a function whose return value depends on the value of the variables declared outside the function.
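For example:

var factor = 3
val multiply = (x: Int) => x * factor   // closes over 'factor', which is declared outside the function
multiply(10)                            // 30
factor = 5
multiply(10)                            // 50 – the closure sees the updated value of the free variable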
55) What is Monad in Scala?
A monad is an object that wraps another object. You pass the monad mini-programs, i.e. functions, to perform the data manipulation of the underlying object, instead of manipulating the object directly. The monad chooses how to apply the program to the underlying object.
56) What is the Scala anonymous function?
In a source code, anonymous functions are called ‘function literals’ and at run time, function literals are instantiated into objects called function values. Scala provides a relatively easy syntax for defining anonymous functions.
57) Explain ‘higher-order functions’ in Scala.
Scala allows the definition of higher-order functions. These are functions that take other functions as parameters, or whose result is a function. In the following example, the apply() function takes another function ‘f’ and a value ‘v’, and applies the function f to v.
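The example itself is missing from the text; a minimal version consistent with that description might look like this (the names apply, f and v are taken from the description above):

def apply(f: Int => String, v: Int): String = f(v)   // higher-order: takes a function as a parameter

val format = (x: Int) => "[" + x + "]"
apply(format, 7)                                      // "[7]"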