Question # 1
Problem Scenario 28 : You need to implement a near-real-time solution for collecting information as it is submitted in files with the data below:
echo "IBM,100,20160104" >> /tmp/spooldir2/.bb.txt
echo "IBM,103,20160105" >> /tmp/spooldir2/.bb.txt
mv /tmp/spooldir2/.bb.txt /tmp/spooldir2/bb.txt
After a few minutes:
echo "IBM,100.2,20160104" >> /tmp/spooldir2/.dr.txt
echo "IBM,103.1,20160105" >> /tmp/spooldir2/.dr.txt
mv /tmp/spooldir2/.dr.txt /tmp/spooldir2/dr.txt
You have been given the directory location /tmp/spooldir2 (create it if it does not exist). As soon as a file is committed to this directory, it needs to be available in HDFS in both /tmp/flume/primary and /tmp/flume/secondary. However, note that /tmp/flume/secondary is optional: if a transaction that writes to this directory fails, it need not be rolled back. Write a Flume configuration file named flume8.conf and use it to load the data into HDFS with the following additional properties:
1. Spool the /tmp/spooldir2 directory.
2. The file prefix in HDFS should be events.
3. The file suffix should be .log.
4. While a file is still in use (not yet committed), it should have _ as its prefix.
5. Data should be written to HDFS as text.
Answer: See the explanation for the step-by-step solution and configuration.
Explanation:
Solution :
Step 1 : Create the spool directory.
mkdir /tmp/spooldir2
Step 2 : Create the Flume configuration file with the source, sink and channel configuration below and save it as flume8.conf (hdfs.inUsePrefix covers requirement 4, the _ prefix for in-use files).
agent1.sources = source1
agent1.sinks = sink1a sink1b
agent1.channels = channel1a channel1b
agent1.sources.source1.channels = channel1a channel1b
agent1.sources.source1.selector.type = replicating
agent1.sources.source1.selector.optional = channel1b
agent1.sinks.sink1a.channel = channel1a
agent1.sinks.sink1b.channel = channel1b
agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /tmp/spooldir2
agent1.sinks.sink1a.type = hdfs
agent1.sinks.sink1a.hdfs.path = /tmp/flume/primary
agent1.sinks.sink1a.hdfs.filePrefix = events
agent1.sinks.sink1a.hdfs.fileSuffix = .log
agent1.sinks.sink1a.hdfs.inUsePrefix = _
agent1.sinks.sink1a.hdfs.fileType = DataStream
agent1.sinks.sink1b.type = hdfs
agent1.sinks.sink1b.hdfs.path = /tmp/flume/secondary
agent1.sinks.sink1b.hdfs.filePrefix = events
agent1.sinks.sink1b.hdfs.fileSuffix = .log
agent1.sinks.sink1b.hdfs.inUsePrefix = _
agent1.sinks.sink1b.hdfs.fileType = DataStream
agent1.channels.channel1a.type = file
agent1.channels.channel1b.type = memory
Step 3 : Run the command below, which uses this configuration file to append data into HDFS. Start the Flume agent:
flume-ng agent --conf /home/cloudera/flumeconf --conf-file /home/cloudera/flumeconf/flume8.conf --name agent1
Step 4 : Open another terminal and create the files in /tmp/spooldir2/:
echo "IBM,100,20160104" >> /tmp/spooldir2/.bb.txt
echo "IBM,103,20160105" >> /tmp/spooldir2/.bb.txt
mv /tmp/spooldir2/.bb.txt /tmp/spooldir2/bb.txt
After a few minutes:
echo "IBM,100.2,20160104" >> /tmp/spooldir2/.dr.txt
echo "IBM,103.1,20160105" >> /tmp/spooldir2/.dr.txt
mv /tmp/spooldir2/.dr.txt /tmp/spooldir2/dr.txt
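A quick way to confirm that the replicated events reached both HDFS locations is to read them back from the Spark shell. This is only a verification sketch, not part of the required answer; it assumes the HDFS paths configured above and a spark-shell running against the same cluster.
// Read back what the two HDFS sinks wrote (paths as configured in flume8.conf)
val primary = sc.textFile("/tmp/flume/primary")
primary.collect().foreach(println)      // expect the IBM rows written as plain text
// The secondary channel is optional, so it may be missing data if its transactions failed
val secondary = sc.textFile("/tmp/flume/secondary")
secondary.collect().foreach(println)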
Question # 2
Problem Scenario 79 : You have been given a MySQL DB with the following details.
user=retail_dba
password=cloudera
database=retail_db
table=retail_db.orders
table=retail_db.order_items
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Columns of the products table : (product_id | product_category_id | product_name | product_description | product_price | product_image)
Please accomplish the following activities.
1. Copy the "retail_db.products" table to HDFS in a directory p93_products
2. Filter out all the empty prices
3. Sort all the products based on price in both ascending and descending order.
4. Sort all the products based on price as well as product_id in descending order.
5. Use the functions below to do data ordering or ranking and fetch the top 10 elements: top(), takeOrdered(), sortByKey()
Answer: See the explanation for the step-by-step solution and configuration.
Explanation:
Solution :
Step 1 : Import the single table.
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=products --target-dir=p93_products -m 1
Note : Make sure you do not have a space before or after the '=' sign. Sqoop uses the MapReduce framework to copy data from the RDBMS to HDFS.
Step 2 : Read the data from one of the partitions created by the command above.
hadoop fs -cat p93_products/part-m-00000
Step 3 : Load this directory as an RDD using Spark and Python (open a pyspark terminal and do the following).
productsRDD = sc.textFile("p93_products")
Step 4 : Filter out empty prices, if any exist.
# filter out lines with empty prices
nonempty_lines = productsRDD.filter(lambda x: len(x.split(",")[4]) > 0)
Step 5 : Now sort the data based on product_price in ascending order.
sortedPriceProducts = nonempty_lines.map(lambda line: (float(line.split(",")[4]), line.split(",")[2])).sortByKey()
for line in sortedPriceProducts.collect(): print(line)
Step 6 : Now sort the data based on product_price in descending order.
sortedPriceProducts = nonempty_lines.map(lambda line: (float(line.split(",")[4]), line.split(",")[2])).sortByKey(False)
for line in sortedPriceProducts.collect(): print(line)
Step 7 : Get the highest-priced product's name.
sortedPriceProducts = nonempty_lines.map(lambda line: (float(line.split(",")[4]), line.split(",")[2])).sortByKey(False).take(1)
print(sortedPriceProducts)
Step 8 : Now sort the data based on product_price as well as product_id in descending order.
# Don't forget to cast the strings
# Tuple as key: ((price, id), name)
sortedPriceProducts = nonempty_lines.map(lambda line: ((float(line.split(",")[4]), int(line.split(",")[0])), line.split(",")[2])).sortByKey(False)
for line in sortedPriceProducts.collect(): print(line)
Step 9 : Now sort the data based on product_price as well as product_id in descending order, using the top() function.
# Don't forget to cast the strings
# Tuple as key: ((price, id), name)
sortedPriceProducts = nonempty_lines.map(lambda line: ((float(line.split(",")[4]), int(line.split(",")[0])), line.split(",")[2])).top(10)
print(sortedPriceProducts)
Step 10 : Now sort the data based on product_price ascending and product_id ascending, using the takeOrdered() function.
# Don't forget to cast the strings
# Tuple as key: ((price, id), name)
sortedPriceProducts = nonempty_lines.map(lambda line: ((float(line.split(",")[4]), int(line.split(",")[0])), line.split(",")[2])).takeOrdered(10, lambda tuple: (tuple[0][0], tuple[0][1]))
Step 11 : Now sort the data based on product_price descending and product_id ascending, using the takeOrdered() function.
# Don't forget to cast the strings
# Tuple as key: ((price, id), name)
# Using a minus (-) on the key gives descending ordering, but only for numeric values.
sortedPriceProducts = nonempty_lines.map(lambda line: ((float(line.split(",")[4]), int(line.split(",")[0])), line.split(",")[2])).takeOrdered(10, lambda tuple: (-tuple[0][0], tuple[0][1]))
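Since most of the other scenarios in this material are solved in Scala, here is a roughly equivalent Scala sketch of the same filtering, ordering and ranking steps, for comparison only. It assumes the same p93_products directory and the same column positions (price in column 5, product_id in column 1) as above.
val productsRDD = sc.textFile("p93_products")
// keep only lines that actually have a price
val nonEmptyLines = productsRDD.filter(line => line.split(",")(4).nonEmpty)
// ((price, id), name) with the numeric fields cast from String
val keyed = nonEmptyLines.map { line =>
  val f = line.split(",")
  ((f(4).toFloat, f(0).toInt), f(2))
}
keyed.sortByKey(true).take(10).foreach(println)    // ascending by (price, id)
keyed.sortByKey(false).take(10).foreach(println)   // descending by (price, id)
keyed.top(10).foreach(println)                     // 10 largest elements under the natural tuple ordering
keyed.takeOrdered(10).foreach(println)             // 10 smallest elements under the natural tuple ordering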
Question # 3
Problem Scenario 55 : You have been given the below code snippet.
val pairRDD1 = sc.parallelize(List( ("cat",2), ("cat", 5), ("book", 4), ("cat", 12)))
val pairRDD2 = sc.parallelize(List( ("cat",2), ("cup", 5), ("mouse", 4), ("cat", 12)))
operation1
Write a correct code snippet for operation1 which will produce the desired output, shown below.
Array[(String, (Option[Int], Option[Int]))] = Array((book,(Some(4),None)), (mouse,(None,Some(4))), (cup,(None,Some(5))), (cat,(Some(2),Some(2))), (cat,(Some(2),Some(12))), (cat,(Some(5),Some(2))), (cat,(Some(5),Some(12))), (cat,(Some(12),Some(2))), (cat,(Some(12),Some(12))))
Answer: See the explanation for the step-by-step solution and configuration.
Explanation:
Solution :
pairRDD1.fullOuterJoin(pairRDD2).collect
fullOuterJoin [Pair] : Performs the full outer join between two paired RDDs.
Listing Variants:
def fullOuterJoin[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (Option[V], Option[W]))]
def fullOuterJoin[W](other: RDD[(K, W)]): RDD[(K, (Option[V], Option[W]))]
def fullOuterJoin[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (Option[V], Option[W]))]
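Putting it together as a single runnable spark-shell sketch (the order of elements in the collected array may differ between runs, since it depends on partitioning):
// Full outer join keeps every key from either RDD; the missing side becomes None
val pairRDD1 = sc.parallelize(List(("cat", 2), ("cat", 5), ("book", 4), ("cat", 12)))
val pairRDD2 = sc.parallelize(List(("cat", 2), ("cup", 5), ("mouse", 4), ("cat", 12)))
pairRDD1.fullOuterJoin(pairRDD2).collect.foreach(println)
// e.g. (book,(Some(4),None)), (mouse,(None,Some(4))), (cat,(Some(2),Some(2))), ...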
Question # 4
Problem Scenario 33 : You have been given files as below.
spark5/EmployeeName.csv (id,name)
spark5/EmployeeSalary.csv (id,salary)
The data is given below:
EmployeeName.csv
E01,Lokesh
E02,Bhupesh
E03,Amit
E04,Ratan
E05,Dinesh
E06,Pavan
E07,Tejas
E08,Sheela
E09,Kumar
E10,Venkat
EmployeeSalary.csv
E01,50000
E02,50000
E03,45000
E04,45000
E05,50000
E06,45000
E07,50000
E08,10000
E09,10000
E10,10000
Now write Spark code in Scala which will load these two files from HDFS and join them to produce (name, salary) values. Then save the data in multiple files grouped by salary (meaning each file will contain the names of employees with the same salary). Make sure the file name includes the salary as well.
Answer: See the explanation for the step-by-step solution and configuration.
Explanation:
Solution :
Step 1 : Create both files in HDFS (we will do it using Hue). However, you can first create them in the local filesystem and then upload them to HDFS.
Step 2 : Load the EmployeeName.csv file from HDFS and create a PairRDD.
val name = sc.textFile("spark5/EmployeeName.csv")
val namePairRDD = name.map(x => (x.split(",")(0), x.split(",")(1)))
Step 3 : Load the EmployeeSalary.csv file from HDFS and create a PairRDD.
val salary = sc.textFile("spark5/EmployeeSalary.csv")
val salaryPairRDD = salary.map(x => (x.split(",")(0), x.split(",")(1)))
Step 4 : Join the PairRDDs.
val joined = namePairRDD.join(salaryPairRDD)
Step 5 : Drop the id key from the joined RDD, leaving (name, salary) pairs.
val keyRemoved = joined.values
Step 6 : Now swap the pairs so that salary becomes the key.
val swapped = keyRemoved.map(item => item.swap)
Step 7 : Now group by key (this generates a key and a collection of values).
val grpByKey = swapped.groupByKey().collect()
Step 8 : Now create an RDD for each value collection.
val rddByKey = grpByKey.map{case (k,v) => k -> sc.makeRDD(v.toSeq)}
Step 9 : Save the output as text files, one directory per salary.
rddByKey.foreach{ case (k, rdd) => rdd.saveAsTextFile("spark5/Employee" + k) }
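If a Spark 2.x shell is available, a shorter alternative is to let the DataFrame writer split the output by salary. This is only a sketch under that assumption; the output directory spark5/EmployeeBySalary is a hypothetical name, and each salary ends up in its own subdirectory (e.g. salary=50000/).
import spark.implicits._   // 'spark' is the SparkSession predefined in the Spark 2.x shell
val nameDF = sc.textFile("spark5/EmployeeName.csv").map(_.split(",")).map(a => (a(0), a(1))).toDF("id", "name")
val salaryDF = sc.textFile("spark5/EmployeeSalary.csv").map(_.split(",")).map(a => (a(0), a(1))).toDF("id", "salary")
nameDF.join(salaryDF, "id")
  .select("salary", "name")
  .write
  .partitionBy("salary")   // one output directory per distinct salary
  .csv("spark5/EmployeeBySalary")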
Question # 5
Problem Scenario 90 : You have been given the below two files.
course.txt
id,course
1,Hadoop
2,Spark
3,HBase
fee.txt
id,fee
2,3900
3,4200
4,2900
Accomplish the following activities.
1. Select all the courses and their fees, whether the fee is listed or not.
2. Select all the available fees and the respective course. If the course does not exist, still list the fee.
3. Select all the courses and their fees, whether the fee is listed or not. However, ignore records having the fee as null.
Answer: See the explanation for the step-by-step solution and configuration.
Explanation:
Solution :
Step 1 : Put the files into HDFS.
hdfs dfs -mkdir sparksql4
hdfs dfs -put course.txt sparksql4/
hdfs dfs -put fee.txt sparksql4/
Step 2 : Now, in the Spark shell:
// load the data into new RDDs
val course = sc.textFile("sparksql4/course.txt")
val fee = sc.textFile("sparksql4/fee.txt")
// return the first element in each RDD
course.first()
fee.first()
// define the schema using case classes
case class Course(id: Integer, name: String)
case class Fee(id: Integer, fee: Integer)
// create RDDs of Course and Fee objects
val courseRDD = course.map(_.split(",")).map(c => Course(c(0).toInt, c(1)))
val feeRDD = fee.map(_.split(",")).map(c => Fee(c(0).toInt, c(1).toInt))
courseRDD.first()
courseRDD.count()
feeRDD.first()
feeRDD.count()
// convert the RDDs to DataFrames
val courseDF = courseRDD.toDF()
val feeDF = feeRDD.toDF()
// register the DataFrames as temp tables
courseDF.registerTempTable("course")
feeDF.registerTempTable("fee")
// Select data from the tables
val results = sqlContext.sql("SELECT * FROM course")
results.show()
val results = sqlContext.sql("SELECT * FROM fee")
results.show()
// 1. All courses and their fees, whether the fee is listed or not
val results = sqlContext.sql("SELECT * FROM course LEFT JOIN fee ON course.id = fee.id")
results.show()
// 2. All available fees and the respective course, even if the course does not exist
val results = sqlContext.sql("SELECT * FROM course RIGHT JOIN fee ON course.id = fee.id")
results.show()
// 3. All courses and their fees, ignoring records where the fee is null
val results = sqlContext.sql("SELECT * FROM course LEFT JOIN fee ON course.id = fee.id WHERE fee.id IS NOT NULL")
results.show()
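For reference only, the same three joins can also be written with the DataFrame API instead of SQL, reusing the courseDF and feeDF frames defined above ("left_outer" and "right_outer" are the standard Spark join-type strings):
// 1. Every course with its fee, if any
courseDF.join(feeDF, courseDF("id") === feeDF("id"), "left_outer").show()
// 2. Every fee with its course, if any
courseDF.join(feeDF, courseDF("id") === feeDF("id"), "right_outer").show()
// 3. Left join, but drop courses that have no fee
courseDF.join(feeDF, courseDF("id") === feeDF("id"), "left_outer").filter(feeDF("fee").isNotNull).show()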
Question # 6
Problem Scenario 61 : You have been given the below code snippet.
val a = sc.parallelize(List("dog", "salmon", "salmon", "rat", "elephant"), 3)
val b = a.keyBy(_.length)
val c = sc.parallelize(List("dog","cat","gnu","salmon","rabbit","turkey","wolf","bear","bee"), 3)
val d = c.keyBy(_.length)
operation1
Write a correct code snippet for operation1 which will produce the desired output, shown below.
Array[(Int, (String, Option[String]))] = Array((6,(salmon,Some(salmon))), (6,(salmon,Some(rabbit))), (6,(salmon,Some(turkey))), (6,(salmon,Some(salmon))), (6,(salmon,Some(rabbit))), (6,(salmon,Some(turkey))), (3,(dog,Some(dog))), (3,(dog,Some(cat))), (3,(dog,Some(gnu))), (3,(dog,Some(bee))), (3,(rat,Some(dog))), (3,(rat,Some(cat))), (3,(rat,Some(gnu))), (3,(rat,Some(bee))), (8,(elephant,None)))
Answer: See the explanation for the step-by-step solution and configuration.
Explanation:
Solution :
b.leftOuterJoin(d).collect
leftOuterJoin [Pair] : Performs a left outer join using two key-value RDDs. Please note that the keys must be generally comparable to make this work.
keyBy : Constructs two-component tuples (key-value pairs) by applying a function to each data item. The result of the function becomes the key and the original data item becomes the value of the newly created tuples.
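A minimal runnable sketch of the snippet end to end (the ordering within the collected array may vary with partitioning):
// keyBy turns each word into a (length, word) pair; leftOuterJoin keeps every
// key from b and wraps matches from d in Some, or None when d has no such key.
val a = sc.parallelize(List("dog", "salmon", "salmon", "rat", "elephant"), 3)
val b = a.keyBy(_.length)
val c = sc.parallelize(List("dog", "cat", "gnu", "salmon", "rabbit", "turkey", "wolf", "bear", "bee"), 3)
val d = c.keyBy(_.length)
b.leftOuterJoin(d).collect.foreach(println)
// e.g. (8,(elephant,None)) shows a key present only on the left side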
Question # 7
Problem Scenario 7 : You have been given the following MySQL database details as well as other info.
user=retail_dba
password=cloudera
database=retail_db
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Please accomplish the following.
1. Import the departments table using a custom boundary query, which imports departments between 1 and 25.
2. Also make sure the table's data is partitioned into 2 files, e.g. part-00000, part-00001.
3. Also make sure you have imported only two columns from the table: department_id and department_name.
Answer: See the explanation for the step-by-step solution and configuration.
Explanation:
Solutions :
Step 1 : Clean the HDFS file system; if these directories exist, remove them.
hadoop fs -rm -R departments
hadoop fs -rm -R categories
hadoop fs -rm -R products
hadoop fs -rm -R orders
hadoop fs -rm -R order_items
hadoop fs -rm -R customers
Step 2 : Now import the departments table as per the requirement.
sqoop import \
--connect jdbc:mysql://quickstart:3306/retail_db \
--username=retail_dba \
--password=cloudera \
--table departments \
--target-dir /user/cloudera/departments \
-m 2 \
--boundary-query "select 1, 25 from departments" \
--columns department_id,department_name
Step 3 : Check the imported data.
hdfs dfs -ls departments
hdfs dfs -cat departments/part-m-00000
hdfs dfs -cat departments/part-m-00001
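As an optional sanity check from the Spark shell (assuming the same HDFS home directory the import wrote to), you can read the Sqoop output back and confirm that only the two requested columns and department ids between 1 and 25 were imported:
// Read the Sqoop output back; each line should look like "2,Fitness"
val departments = sc.textFile("departments")
departments.take(10).foreach(println)
// every department_id should fall inside the 1..25 boundary
departments.map(_.split(",")(0).toInt).filter(id => id < 1 || id > 25).count()   // expect 0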
Cloudera CCA175 Exam Dumps
Pass your CCA Spark and Hadoop Developer exam on the first attempt with CCA175 exam dumps. Real CCA Spark and Hadoop Developer exam questions, as in the actual exam!
— 96 Questions With Valid Answers
— Update Date : 15-Apr-2025
— Free CCA175 Updates for 90 Days
— 98% CCA Spark and Hadoop Developer Exam Passing Rate
PDF Only Price: $49.99, currently $19.99
- Number 1 Cloudera CCA Spark and Hadoop Developer study material online
- Regular CCA175 dumps updates for free.
- CCA Spark and Hadoop Developer practice exam questions with their answers and explanations.
- Our commitment to your success continues through your exam with 24/7 support.
- Free CCA175 exam dumps updates for 90 days
- 97% more cost effective than traditional training
- CCA Spark and Hadoop Developer Exam Practice test to boost your knowledge
- 100% correct CCA Spark and Hadoop Developer questions and answers compiled by senior IT professionals
Cloudera CCA175 Braindumps
Realbraindumps.com is providing CCA Spark and Hadoop Developer CCA175 braindumps which are accurate, of high quality, and verified by a team of experts. The Cloudera CCA175 dumps are comprised of CCA Spark and Hadoop Developer Exam questions and answers, available in printable PDF files and online practice test formats. Our best-recommended and most economical package is the CCA Spark and Hadoop Developer PDF file plus test engine discount package, along with 3 months of free updates of CCA175 exam questions. We have compiled the CCA Spark and Hadoop Developer exam dumps question-and-answer PDF file for you so that you can easily prepare for your exam. Our Cloudera braindumps will help you in the exam. Obtaining valuable professional Cloudera CCA Spark and Hadoop Developer certifications with CCA175 exam questions and answers will always be beneficial to IT professionals by enhancing their knowledge and boosting their career.
Yes, really, it is not as tough as before. Websites like Realbraindumps.com play a significant role in making it possible, in this competitive world, to pass exams with the help of CCA Spark and Hadoop Developer CCA175 dumps questions. We are here to encourage your ambition and to help you in all possible ways. Our excellent and incomparable Cloudera CCA Spark and Hadoop Developer Exam questions and answers study material will help you get through your CCA175 certification exam on the first attempt.
Pass your exam with Cloudera CCA Spark and Hadoop Developer dumps. We at Realbraindumps are committed to providing you CCA Spark and Hadoop Developer Exam braindumps questions and answers online. We recommend you prepare from our study material and boost your knowledge. You can also get a discount on our Cloudera CCA175 dumps: just talk with our support representatives and ask for a special discount on CCA Spark and Hadoop Developer exam braindumps. We have the latest CCA175 exam dumps, with all Cloudera CCA Spark and Hadoop Developer Exam dumps questions written to the highest standards of technical accuracy; they can be instantly downloaded and accessed by candidates once purchased. Practicing online CCA Spark and Hadoop Developer CCA175 braindumps will help you get fully prepared and familiar with the real exam conditions. Free CCA Spark and Hadoop Developer exam braindumps demos are available for your satisfaction before you place a purchase order.
Send us an email if you want to check the Cloudera CCA175 CCA Spark and Hadoop Developer Exam demo before your purchase, and our support team will send it to you by email.
If you don't find your dumps here then you can request what you need and we shall provide it to you.
Bulk Packages (mention the exam codes in the payment description):
- $50: 3 exam PDFs ($33 discount)
- $70: 5 exam PDFs ($65 discount)
- $100: 5 exam PDFs + test engine ($105 discount)
Jessica Doe
CCA Spark and Hadoop Developer
We are providing Cloudera CCA175 braindumps with practice exam questions and answers. These will help you prepare for your CCA Spark and Hadoop Developer exam. Buy CCA Spark and Hadoop Developer CCA175 dumps and boost your knowledge.