Hadoop/Map-reduce

https://www.guru99.com/create-your-first-hadoop-program.html

Some Hadoop commands
* https://community.cloudera.com/t5/Support-Questions/Closed-How-to-store-output-of-shell-script-in-HDFS/td-p/229933
* https://stackoverflow.com/questions/26513861/checking-if-directory-in-hdfs-already-exists-or-not

--------------
To run firefox/anything graphical inside the VM run by vagrant, you have to ssh -Y onto analytics and then again, from analytics, onto the vagrant VM:
1. ssh analytics -Y
2. [anupama@analytics vagrant-hadoop-hive-spark]$ vagrant ssh -- -Y
   or
   vagrant ssh -- -Y node1
   (the -- flag tells the vagrant command that the subsequent -Y flag should be passed to the ssh cmd that vagrant runs)

Only once you have ssh-ed with vagrant into the VM whose hostname is "node1" do you have access to node1's assigned IP: 10.211.55.101
- Connecting machines, like analytics, must either access node1 directly or use port forwarding to view the VM's servers on localhost. For example, on analytics the Yarn pages can be viewed at http://localhost:8088/
- If firefox is launched inside the VM (so inside node1), then pages can be accessed off their respective ports at any of localhost|10.211.55.101|node1.

===========================================
  WARC TO WET
===========================================
https://groups.google.com/forum/#!topic/common-crawl/hsb90GHq6to

Sebastian Nagel
05/07/2017

Hi,

unfortunately, we do not provide (yet) text extracts of the CC-NEWS archives.

But it's easy to run the WET extractor on the WARC files, see:
  https://groups.google.com/d/topic/common-crawl/imv4hlLob4s/discussion
  https://groups.google.com/d/topic/common-crawl/b6yDG7EmnhM/discussion

That's what you have to do:

# download the WARC files and place them in a directory "warc/"
# create sibling folders wat and wet
# |
# |-- warc/
# |   |-- CC-NEWS-20161001224340-00008.warc.gz
# |   |-- CC-NEWS-20161017145313-00000.warc.gz
# |   `-- ...
# |
# |-- wat/
# |
# `-- wet/

git clone https://github.com/commoncrawl/ia-web-commons
cd ia-web-commons
mvn install

cd ..
git clone https://github.com/commoncrawl/ia-hadoop-tools
cd ia-hadoop-tools
mvn package

java -jar $PWD/target/ia-hadoop-tools-jar-with-dependencies.jar WEATGenerator \
  -strictMode -skipExisting batch-id-xyz .../warc/*.warc.gz

The folders wat/ and wet/ will then contain the exports.

Best,
Sebastian

---

1. So, following the above instructions, I first made a warc subfolder in hdfs:///user/vagrant/cc-mri-subset/
Then I moved all the downloaded *warc.gz files into there.
Then I created wat and wet subfolders in there, alongside the warc folder.

2. Next, I did the 2 git clone and mvn compile operations above.
The first, ia-web-commons, compiled successfully (despite some test failures).

3. BUT THE 2ND GIT PROJECT, ia-hadoop-tools, DIDN'T COMPILE AT FIRST, with the mvn package command failing:

   git clone https://github.com/commoncrawl/ia-hadoop-tools
   cd ia-hadoop-tools
   mvn package

The compile failed with a message about the JSONTokener constructor not taking a String object. It turned out that the JSONTokener being used was a version of the class that was too old.
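(A standard Maven way to check which artifact is supplying the stale org.json classes, not something recorded in the original steps, would be:

   cd ia-hadoop-tools
   mvn dependency:tree -Dincludes=org.json

or simply mvn dependency:tree piped through grep -i json.)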
The necessary constructor is present in the most recent version, as seen in the API at
https://docs.oracle.com/cd/E51273_03/ToolsAndFrameworks.110/apidoc/assembler/com/endeca/serialization/json/JSONTokener.html

So instead, I opened up ia-hadoop-tools/pom.xml for editing and added the newest version of the org.json package's json artifact (see http://builds.archive.org/maven2/org/json/json/ for the versions available there) into the pom.xml's <dependencies> element, based on how this was done at
https://bukkit.org/threads/problem-loading-libraries-with-maven-java-lang-noclassdeffounderror-org-json-jsonobject.428354/:

    <dependency>
      <groupId>org.json</groupId>
      <artifactId>json</artifactId>
      <version>20131018</version>
    </dependency>

Then I was able to run "mvn package" successfully.
(Maybe I could also have added in a far more recent version, as seen in the version numbers at https://mvnrepository.com/artifact/org.json/json, but I didn't want to go too far ahead in case there was other incompatibility.)

4. Next, I wanted to finally run the built executable to convert the warc files to wet files.

I had the warc files on the hadoop filesystem. The original instructions at https://groups.google.com/forum/#!topic/common-crawl/hsb90GHq6to, however, were apparently for working with warcs stored on the local filesystem, as those instructions ran the regular java command rather than the hadoop command. The regular java command did not work with the files being on the hadoop filesystem (attempt #1 below).

ATTEMPTS THAT DIDN'T WORK:
1. vagrant@node1:~/ia-hadoop-tools$ java -jar $PWD/target/ia-hadoop-tools-jar-with-dependencies.jar WEATGenerator -strictMode -skipExisting batch-id-xyz hdfs:///user/vagrant/cc-mri-subset/warc/*.warc.gz
2. vagrant@node1:~/ia-hadoop-tools$ $HADOOP_MAPRED_HOME/bin/hadoop jar $PWD/target/ia-hadoop-tools-jar-with-dependencies.jar WEATGenerator -strictMode -skipExisting batch-id-xyz hdfs:///user/vagrant/cc-mri-subset/warc/*.warc.gz

The 2nd attempt, which uses a proper hadoop command, I based off https://www.tutorialspoint.com/map_reduce/implementation_in_hadoop.htm
It produced lots of errors, and the output wet (and wat) .gz files were all corrupt, as gunzip could not successfully run over them:

vagrant@node1:~/ia-hadoop-tools$ $HADOOP_MAPRED_HOME/bin/hadoop jar $PWD/target/ia-hadoop-tools-jar-with-dependencies.jar WEATGenerator -strictMode -skipExisting batch-id-xyz hdfs:///user/vagrant/cc-mri-subset/warc/*.warc.gz
19/09/05 05:57:22 INFO Configuration.deprecation: mapred.task.timeout is deprecated. Instead, use mapreduce.task.timeout
19/09/05 05:57:23 INFO jobs.WEATGenerator: Add input path: hdfs://node1/user/vagrant/cc-mri-subset/warc/MAORI-CC-2019-30-20190902100139-000000.warc.gz
19/09/05 05:57:23 INFO jobs.WEATGenerator: Add input path: hdfs://node1/user/vagrant/cc-mri-subset/warc/MAORI-CC-2019-30-20190902100141-000001.warc.gz
19/09/05 05:57:23 INFO jobs.WEATGenerator: Add input path: hdfs://node1/user/vagrant/cc-mri-subset/warc/MAORI-CC-2019-30-20190902100451-000002.warc.gz
19/09/05 05:57:23 INFO jobs.WEATGenerator: Add input path: hdfs://node1/user/vagrant/cc-mri-subset/warc/MAORI-CC-2019-30-20190902100453-000003.warc.gz
19/09/05 05:57:23 INFO jobs.WEATGenerator: Add input path: hdfs://node1/user/vagrant/cc-mri-subset/warc/MAORI-CC-2019-30-20190902100805-000004.warc.gz
19/09/05 05:57:23 INFO jobs.WEATGenerator: Add input path: hdfs://node1/user/vagrant/cc-mri-subset/warc/MAORI-CC-2019-30-20190902100809-000005.warc.gz
19/09/05 05:57:23 INFO jobs.WEATGenerator: Add input path: hdfs://node1/user/vagrant/cc-mri-subset/warc/MAORI-CC-2019-30-20190902101119-000006.warc.gz
19/09/05 05:57:23 INFO jobs.WEATGenerator: Add input path: hdfs://node1/user/vagrant/cc-mri-subset/warc/MAORI-CC-2019-30-20190902101119-000007.warc.gz
19/09/05 05:57:23 INFO jobs.WEATGenerator: Add input path: hdfs://node1/user/vagrant/cc-mri-subset/warc/MAORI-CC-2019-30-20190902101429-000008.warc.gz
19/09/05 05:57:23 INFO jobs.WEATGenerator: Add input path: hdfs://node1/user/vagrant/cc-mri-subset/warc/MAORI-CC-2019-30-20190902101429-000009.warc.gz
19/09/05 05:57:23 INFO client.RMProxy: Connecting to ResourceManager at node1/127.0.0.1:8032
19/09/05 05:57:23 INFO client.RMProxy: Connecting to ResourceManager at node1/127.0.0.1:8032
19/09/05 05:57:23 INFO mapred.FileInputFormat: Total input paths to process : 10
19/09/05 05:57:24 INFO mapreduce.JobSubmitter: number of splits:10
19/09/05 05:57:24 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1567397114047_0001
19/09/05 05:57:24 INFO impl.YarnClientImpl: Submitted application application_1567397114047_0001
19/09/05 05:57:24 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1567397114047_0001/
19/09/05 05:57:24 INFO mapreduce.Job: Running job: job_1567397114047_0001
19/09/05 05:57:31 INFO mapreduce.Job: Job job_1567397114047_0001 running in uber mode : false
19/09/05 05:57:31 INFO mapreduce.Job: map 0% reduce 0%
19/09/05 05:57:44 INFO mapreduce.Job: map 10% reduce 0%
19/09/05 05:57:44 INFO mapreduce.Job: Task Id : attempt_1567397114047_0001_m_000002_0, Status : FAILED
Error: com.google.common.io.ByteStreams.limit(Ljava/io/InputStream;J)Ljava/io/InputStream;
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

19/09/05 05:57:45 INFO mapreduce.Job: Task Id : attempt_1567397114047_0001_m_000004_0, Status : FAILED
Error: com.google.common.io.ByteStreams.limit(Ljava/io/InputStream;J)Ljava/io/InputStream;
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

19/09/05 05:57:45 INFO mapreduce.Job: Task Id : attempt_1567397114047_0001_m_000001_0, Status : FAILED
Error: com.google.common.io.ByteStreams.limit(Ljava/io/InputStream;J)Ljava/io/InputStream;
19/09/05 05:57:45 INFO mapreduce.Job: Task Id : attempt_1567397114047_0001_m_000005_0, Status : FAILED
Error: com.google.common.io.ByteStreams.limit(Ljava/io/InputStream;J)Ljava/io/InputStream;
19/09/05 05:57:45 INFO mapreduce.Job: Task Id : attempt_1567397114047_0001_m_000000_0, Status : FAILED
Error: com.google.common.io.ByteStreams.limit(Ljava/io/InputStream;J)Ljava/io/InputStream;
19/09/05 05:57:45 INFO mapreduce.Job: Task Id : attempt_1567397114047_0001_m_000003_0, Status : FAILED
Error: com.google.common.io.ByteStreams.limit(Ljava/io/InputStream;J)Ljava/io/InputStream;
19/09/05 05:57:46 INFO mapreduce.Job: map 0% reduce 0%
19/09/05 05:57:54 INFO mapreduce.Job: Task Id : attempt_1567397114047_0001_m_000007_0, Status : FAILED
Error: com.google.common.io.ByteStreams.limit(Ljava/io/InputStream;J)Ljava/io/InputStream;
19/09/05 05:57:55 INFO mapreduce.Job: Task Id : attempt_1567397114047_0001_m_000006_0, Status : FAILED
Error: com.google.common.io.ByteStreams.limit(Ljava/io/InputStream;J)Ljava/io/InputStream;
19/09/05 05:57:57 INFO mapreduce.Job: map 10% reduce 0%
19/09/05 05:57:57 INFO mapreduce.Job: Task Id : attempt_1567397114047_0001_m_000009_0, Status : FAILED
Error: com.google.common.io.ByteStreams.limit(Ljava/io/InputStream;J)Ljava/io/InputStream;
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

19/09/05 05:57:58 INFO mapreduce.Job: map 20% reduce 0%
19/09/05 05:57:58 INFO mapreduce.Job: Task Id : attempt_1567397114047_0001_m_000008_0, Status : FAILED
Error: com.google.common.io.ByteStreams.limit(Ljava/io/InputStream;J)Ljava/io/InputStream;
19/09/05 05:58:06 INFO mapreduce.Job: map 30% reduce 0%
19/09/05 05:58:08 INFO mapreduce.Job: map 60% reduce 0%
19/09/05 05:58:09 INFO mapreduce.Job: map 70% reduce 0%
19/09/05 05:58:10 INFO mapreduce.Job: map 80% reduce 0%
19/09/05 05:58:12 INFO mapreduce.Job: map 90% reduce 0%
19/09/05 05:58:13 INFO mapreduce.Job: map 100% reduce 0%
19/09/05 05:58:13 INFO mapreduce.Job: Job job_1567397114047_0001 completed successfully
19/09/05 05:58:13 INFO mapreduce.Job: Counters: 32
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=1239360
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=1430
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=30
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=0
        Job Counters
                Failed map tasks=10
                Launched map tasks=20
                Other local map tasks=10
                Data-local map tasks=10
                Total time spent by all maps in occupied slots (ms)=208160
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=208160
                Total vcore-milliseconds taken by all map tasks=208160
                Total megabyte-milliseconds taken by all map tasks=213155840
        Map-Reduce Framework
                Map input records=10
                Map output records=0
                Input split bytes=1430
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=1461
                CPU time spent (ms)=2490
                Physical memory (bytes) snapshot=1564528640
                Virtual memory (bytes) snapshot=19642507264
                Total committed heap usage (bytes)=1126170624
        File Input Format Counters
                Bytes Read=0
        File Output Format Counters
                Bytes Written=0
vagrant@node1:~/ia-hadoop-tools$

5. The error messages are all the same but not very informative:

   19/09/05 05:57:45 INFO mapreduce.Job: Task Id : attempt_1567397114047_0001_m_000001_0, Status : FAILED
   Error: com.google.common.io.ByteStreams.limit(Ljava/io/InputStream;J)Ljava/io/InputStream;

All the references I could find on Google indicated that the full version of the error message was that this method (com.google.common.io.ByteStreams.limit(...)) could not be located.
The page at http://mail-archives.apache.org/mod_mbox/spark-user/201510.mbox/%3C7AD5ED91-F8BA-43C9-9030-049BAB360BD4@gmail.com%3E revealed that guava.jar contains the com.google.common.io.ByteStreams class.

TO GET THE EXECUTABLE TO WORK:
I located guava.jar, found there were 2 identical copies on the filesystem but that neither was on the hadoop classpath yet, so I copied one into a location already on the hadoop classpath. Then I was able to successfully run the executable and at last produce meaningful WET files from the WARC input files:

vagrant@node1:~$ locate guava.jar
/usr/share/java/guava.jar
/usr/share/maven/lib/guava.jar
vagrant@node1:~$ jar -tvf /usr/share/maven/lib/guava.jar | less
vagrant@node1:~$ jar -tvf /usr/share/java/guava.jar | less
# both contained the ByteStreams class
vagrant@node1:~$ cd -
/home/vagrant/ia-hadoop-tools
vagrant@node1:~/ia-hadoop-tools$ find . -name "guava.jar"
# None in the git project
vagrant@node1:~/ia-hadoop-tools$ hadoop classpath
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
# guava.jar not on hadoop classpath yet
vagrant@node1:~/ia-hadoop-tools$ diff /usr/share/java/guava.jar /usr/share/maven/lib/guava.jar
# no differences, identical
vagrant@node1:~/ia-hadoop-tools$ hdfs dfs -put /usr/share/java/guava.jar /usr/local/hadoop/share/hadoop/common/.
put: `/usr/local/hadoop/share/hadoop/common/.': No such file or directory
# hadoop classpath locations are not on the hdfs filesystem, but on the regular fs
vagrant@node1:~/ia-hadoop-tools$ sudo cp /usr/share/java/guava.jar /usr/local/hadoop/share/hadoop/common/.
vagrant@node1:~/ia-hadoop-tools$ hadoop classpath
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
# Copied guava.jar to somewhere on the existing hadoop classpath
vagrant@node1:~/ia-hadoop-tools$ $HADOOP_MAPRED_HOME/bin/hadoop jar $PWD/target/ia-hadoop-tools-jar-with-dependencies.jar WEATGenerator -strictMode -skipExisting batch-id-xyz hdfs:///user/vagrant/cc-mri-subset/warc/*.warc.gz
# Successful run
vagrant@node1:~$ hdfs dfs -get /user/vagrant/cc-mri-subset/wet/MAORI-CC-2019-30-20190902100139-000000.warc.wet.gz /home/vagrant/.
vagrant@node1:~$ cd ..
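# (Not part of the original session) a quick sanity check at this point would be to list the wet/
# output folder and confirm there is one .warc.wet.gz per input WARC, e.g.:
#    hdfs dfs -ls /user/vagrant/cc-mri-subset/wet/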
vagrant@node1:~$ gunzip MAORI-CC-2019-30-20190902100139-000000.warc.wet.gz
vagrant@node1:~$ less MAORI-CC-2019-30-20190902100139-000000.warc.wet
# Copied a WET output file from the hadoop filesystem to the local filesystem and inspected its contents. Works!

-----------------------------------
VIEW THE MRI-ONLY INDEX GENERATED
-----------------------------------
hdfs dfs -cat hdfs:///user/vagrant/cc-mri-csv/part* | tail -5
(gz archive, binary file)

vagrant@node1:~/cc-index-table/src/script$ hdfs dfs -mkdir hdfs:///user/vagrant/cc-mri-unzipped-csv

# https://stackoverflow.com/questions/34573279/how-to-unzip-gz-files-in-a-new-directory-in-hadoop
XXX vagrant@node1:~/cc-index-table/src/script$ hadoop fs -cat hdfs:///user/vagrant/cc-mri-csv/part* | gzip -d | hadoop fs -put - hdfs:///user/vagrant/cc-mri-unzipped-csv

vagrant@node1:~/cc-index-table/src/script$ hdfs dfs -cat hdfs:///user/vagrant/cc-mri-csv/part* | gzip -d | hdfs dfs -put - hdfs:///user/vagrant/cc-mri-unzipped-csv/cc-mri.csv

vagrant@node1:~/cc-index-table/src/script$ hdfs dfs -ls hdfs:///user/vagrant/cc-mri-unzipped-csv
Found 1 items
-rw-r--r--   1 vagrant supergroup   71664603 2019-08-29 04:47 hdfs:///user/vagrant/cc-mri-unzipped-csv/cc-mri.csv

# https://stackoverflow.com/questions/14925323/view-contents-of-file-in-hdfs-hadoop
vagrant@node1:~/cc-index-table/src/script$ hdfs dfs -cat hdfs:///user/vagrant/cc-mri-unzipped-csv/cc-mri.csv | tail -5
# url, warc_filename, warc_record_offset, warc_record_length
http://paupauocean.com/page91?product_id=142&brd=1,crawl-data/CC-MAIN-2019-30/segments/1563195526940.0/warc/CC-MAIN-20190721082354-20190721104354-00088.warc.gz,115081770,21404
https://cookinseln-reisen.de/cook-inseln/rarotonga/,crawl-data/CC-MAIN-2019-30/segments/1563195526799.4/warc/CC-MAIN-20190720235054-20190721021054-00289.warc.gz,343512295,12444
http://www.halopharm.com/mi/profile/,crawl-data/CC-MAIN-2019-30/segments/1563195525500.21/warc/CC-MAIN-20190718042531-20190718064531-00093.warc.gz,219160333,10311
https://www.firstpeople.us/pictures/green/Touched-by-the-hand-of-Time-1907.html,crawl-data/CC-MAIN-2019-30/segments/1563195526670.1/warc/CC-MAIN-20190720194009-20190720220009-00362.warc.gz,696195242,5408
https://www.sos-accessoire.com/programmateur-programmateur-module-electronique-whirlpool-481231028062-27573.html,crawl-data/CC-MAIN-2019-30/segments/1563195527048.80/warc/CC-MAIN-20190721144008-20190721170008-00164.warc.gz,830087190,26321

# https://stackoverflow.com/questions/32612867/how-to-count-lines-in-a-file-on-hdfs-command
vagrant@node1:~/cc-index-table/src/script$ hdfs dfs -cat hdfs:///user/vagrant/cc-mri-unzipped-csv/cc-mri.csv | wc -l
345625

ANOTHER WAY (DR BAINBRIDGE'S WAY) TO CREATE A SINGLE .CSV FILE FROM THE /part* FILES AND VIEW ITS CONTENTS:
vagrant@node1:~/cc-index-table$ hdfs dfs -cat hdfs:///user/vagrant/cc-mri-csv/part* > file.csv.gz
vagrant@node1:~/cc-index-table$ less file.csv.gz

https://www.patricia-anong.com/blog/2017/11/1/extend-vmdk-on-virtualbox

When not using LIKE '%mri%' but = 'mri' instead:
vagrant@node1:~/cc-index-table$ hdfs dfs -cat hdfs:///user/vagrant/cc-mri-unzipped-csv/cc-mri.csv | wc -l
5767

For a month later, the August 2019 crawl:
vagrant@node1:~$ hdfs dfs -cat hdfs:///user/vagrant/CC-MAIN-2019-35/cc-mri-unzipped-csv/cc-mri.csv | wc -l
9318

-----------------------------------------
Running export_mri_subset.sh
-----------------------------------------
The export_mri_subset.sh script is set up to run on the csv input file produced by running export_mri_index_csv.sh.

Running this initially produced the following exception:
2019-08-29 05:48:52 INFO CCIndexExport:152 - Number of records/rows matched by query: 345624
2019-08-29 05:48:52 INFO CCIndexExport:157 - Distributing 345624 records to 70 output partitions (max. 5000 records per WARC file)
2019-08-29 05:48:52 INFO CCIndexExport:165 - Repartitioning data to 70 output partitions
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`url`' given input columns: [http://176.31.110.213:600/?p=287, crawl-data/CC-MAIN-2019-30/segments/1563195527531.84/warc/CC-MAIN-20190722051628-20190722073628-00547.warc.gz, 1215489, 15675];;
'Project ['url, 'warc_filename, 'warc_record_offset, 'warc_record_length]
+- AnalysisBarrier
   +- Repartition 70, true
      +- Relation[http://176.31.110.213:600/?p=287#10,crawl-data/CC-MAIN-2019-30/segments/1563195527531.84/warc/CC-MAIN-20190722051628-20190722073628-00547.warc.gz#11,1215489#12,15675#13] csv

        at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:88)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:85)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
        at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:95)
        at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:95)
        at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:106)
        at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:116)
        at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1$1.apply(QueryPlan.scala:120)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.AbstractTraversable.map(Traversable.scala:104)
        at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:120)
        at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:125)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
        at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:125)
        at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:95)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:85)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:80)
        at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:80)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:91)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:104)
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:3295)
        at org.apache.spark.sql.Dataset.select(Dataset.scala:1307)
        at org.apache.spark.sql.Dataset.select(Dataset.scala:1325)
        at org.apache.spark.sql.Dataset.select(Dataset.scala:1325)
        at org.commoncrawl.spark.examples.CCIndexWarcExport.run(CCIndexWarcExport.java:169)
        at org.commoncrawl.spark.examples.CCIndexExport.run(CCIndexExport.java:192)
        at org.commoncrawl.spark.examples.CCIndexWarcExport.main(CCIndexWarcExport.java:214)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2019-08-29 05:48:52 INFO SparkContext:54 - Invoking stop() from shutdown hook

Hints to solve it were at https://stackoverflow.com/questions/45972929/scala-dataframereader-keep-column-headers

The actual solution is to edit CCIndexWarcExport.java as follows:
1. Set option("header") to false, since the csv file contains no header row, only data rows. (You can confirm the csv has no header row by doing: hdfs dfs -cat hdfs:///user/vagrant/cc-mri-csv/part* | head -5)
2. The 4 column names are then inferred as _c0 to _c3, not as url/warc_filename etc.

emacs src/main/java/org/commoncrawl/spark/examples/CCIndexWarcExport.java

Change:
        sqlDF = sparkSession.read().format("csv").option("header", true).option("inferSchema", true)
                        .load(csvQueryResult);
To:
        sqlDF = sparkSession.read().format("csv").option("header", false).option("inferSchema", true)
                        .load(csvQueryResult);

And comment out:
        //JavaRDD<Row> rdd = sqlDF.select("url", "warc_filename", "warc_record_offset", "warc_record_length").rdd()
        //              .toJavaRDD();
replacing it with a version that uses the default inferred column names:
        JavaRDD<Row> rdd = sqlDF.select("_c0", "_c1", "_c2", "_c3").rdd()
                        .toJavaRDD();

Now recompile:
        mvn package
And run:
        ./src/script/export_mri_subset.sh
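(An alternative I didn't try, so just a sketch: instead of switching the select() over to _c0.._c3, the inferred columns could be renamed straight after loading the headerless csv using Spark's standard toDF(); sqlDF and csvQueryResult are the same variables as in the snippet above:

        sqlDF = sparkSession.read().format("csv").option("header", false).option("inferSchema", true)
                        .load(csvQueryResult)
                        .toDF("url", "warc_filename", "warc_record_offset", "warc_record_length");

That would let the original select("url", ...) line stand unchanged.)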
-------------------------
WET example from https://github.com/commoncrawl/cc-warc-examples

vagrant@node1:~/cc-warc-examples$ hdfs dfs -mkdir /user/vagrant/data
vagrant@node1:~/cc-warc-examples$ hdfs dfs -put data/CC-MAIN-20190715175205-20190715200159-00000.warc.wet.gz hdfs:///user/vagrant/data/.
vagrant@node1:~/cc-warc-examples$ hdfs dfs -ls data
Found 1 items
-rw-r--r--   1 vagrant supergroup  154823265 2019-08-19 08:23 data/CC-MAIN-20190715175205-20190715200159-00000.warc.wet.gz
vagrant@node1:~/cc-warc-examples$ hadoop jar target/cc-warc-examples-0.3-SNAPSHOT-jar-with-dependencies.jar org.commoncrawl.examples.mapreduce.WETWordCount
vagrant@node1:~/cc-warc-examples$ hdfs dfs -cat /tmp/cc/part*

INFO ON HADOOP/HDFS:
https://www.bluedata.com/blog/2016/08/hadoop-hdfs-upgrades-painful/

SPARK:
configure option example: https://stackoverflow.com/questions/46152526/how-should-i-configure-spark-to-correctly-prune-hive-metastore-partitions

LIKE '%isl%'

cd cc-index-table
APPJAR=target/cc-spark-0.2-SNAPSHOT-jar-with-dependencies.jar
> $SPARK_HOME/bin/spark-submit \
    # $SPARK_ON_YARN \
    --conf spark.hadoop.parquet.enable.dictionary=true \
    --conf spark.hadoop.parquet.enable.summary-metadata=false \
    --conf spark.sql.hive.metastorePartitionPruning=true \
    --conf spark.sql.parquet.filterPushdown=true \
    --conf spark.sql.parquet.mergeSchema=true \
    --class org.commoncrawl.spark.examples.CCIndexWarcExport $APPJAR \
    --query "SELECT url, warc_filename, warc_record_offset, warc_record_length FROM ccindex WHERE crawl = 'CC-MAIN-2019-30' AND subset = 'warc' AND content_languages LIKE '%mri%'" \
    --numOutputPartitions 12 \
    --numRecordsPerWarcFile 20000 \
    --warcPrefix ICELANDIC-CC-2018-43 \
    s3://commoncrawl/cc-index/table/cc-main/warc/ \
    .../my_output_path/

---- TIME ----
1. https://dzone.com/articles/need-billions-of-web-pages-dont-bother-crawling
   http://digitalpebble.blogspot.com/2017/03/need-billions-of-web-pages-dont-bother_29.html

"So, not only have CommonCrawl given you loads of web data for free, they’ve also made your life easier by preprocessing the data for you. For many tasks, the content of the WAT or WET files will be sufficient and you won’t have to process the WARC files.

This should not only help you simplify your code but also make the whole processing faster. We recently ran an experiment on CommonCrawl where we needed to extract anchor text from HTML pages. We initially wrote some MapReduce code to extract the binary content of the pages from their WARC representation, processed the HTML with JSoup and reduced on the anchor text. Processing a single WARC segment took roughly 100 minutes on a 10-node EMR cluster. We then simplified the extraction logic, took the WAT files as input and the processing time dropped to 17 minutes on the same cluster. This gain was partly due to not having to parse the web pages, but also to the fact that WAT files are a lot smaller than their WARC counterparts."

2. https://spark-in.me/post/parsing-common-crawl-in-two-simple-commands

"Both the parsing part and the processing part take just a couple of minutes per index file / WET file - the bulk of the “compute” lies within actually downloading these files.

Essentially if you have some time to spare and an unlimited Internet connection, all of this processing can be done on one powerful machine. You can be fancy and go ahead and rent some Amazon server(s) to minimize the download time, but that can be costly.

In my experience - parsing the whole index for Russian websites (just filtering by language) takes approximately 140 hours - but the majority of this time is just downloading (my speed averaged ~300-500 kb/s)."

---- CMDS ----
https://stackoverflow.com/questions/29565716/spark-kill-running-application
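(For reference, the standard YARN way to do this, which is essentially what that link suggests, is to list the running applications and kill by id, e.g.:

   yarn application -list
   yarn application -kill application_1567397114047_0001

substituting whatever application id your job reported, such as the one in the WEATGenerator log further up.)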
=========================================================
Configuring spark to work on Amazon AWS s3a dataset:
=========================================================
https://stackoverflow.com/questions/30385981/how-to-access-s3a-files-from-apache-spark
http://deploymentzone.com/2015/12/20/s3a-on-spark-on-aws-ec2/
https://answers.dataiku.com/1734/common-crawl-s3
https://stackoverflow.com/questions/2354525/what-should-be-hadoop-tmp-dir
https://stackoverflow.com/questions/40169610/where-exactly-should-hadoop-tmp-dir-be-set-core-site-xml-or-hdfs-site-xml
https://stackoverflow.com/questions/43759896/spark-truncated-the-string-representation-of-a-plan-since-it-was-too-large-w

https://sparkour.urizone.net/recipes/using-s3/
Configuring Spark to Use Amazon S3
"Some Spark tutorials show AWS access keys hardcoded into the file paths. This is a horribly insecure approach and should never be done. Use exported environment variables or IAM Roles instead, as described in Configuring Amazon S3 as a Spark Data Source."

"No FileSystem for scheme: s3n
java.io.IOException: No FileSystem for scheme: s3n
This message appears when dependencies are missing from your Apache Spark distribution. If you see this error message, you can use the --packages parameter and Spark will use Maven to locate the missing dependencies and distribute them to the cluster. Alternately, you can use --jars if you manually downloaded the dependencies already. These parameters also works on the spark-submit script."

===========================================
IAM Role (or user) and commoncrawl profile
===========================================

"iam" role or user for commoncrawl(er) profile

aws management console:
    davidb@waikato.ac.nz
    lab pwd, capital R and ! (maybe g)

commoncrawl profile created while creating the user/role, by following the instructions at:
https://answers.dataiku.com/1734/common-crawl-s3

Even though the bucket is public, if your AWS key does not have your full permissions (ie if it's a restricted IAM user), you need to grant explicit access to the commoncrawl bucket: attach the following policy to your IAM user:

#### START JSON (POLICY) ###
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1503647467000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::commoncrawl/*",
                "arn:aws:s3:::commoncrawl"
            ]
        }
    ]
}
#### END ###

[If the accesskey and secret were specified in hadoop's core-site.xml and not in the spark conf props file, then running export_maori_index_csv.sh produced the following error:

2019-08-29 06:16:38 INFO StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint
2019-08-29 06:16:40 WARN FileStreamSink:66 - Error while looking for metadata directory.
Exception in thread "main" com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521) at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031) at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994) at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297) ] Instead of putting the access and secret keys in hadoop's core-site.xml as above (with sudo emacs /usr/local/hadoop-2.7.6/etc/hadoop/core-site.xml) you'll want to put the Amazon AWS access key and secret key in the spark properties file: sudo emacs /usr/local/spark-2.3.0-bin-hadoop2.7/conf/spark-defaults.conf The spark properties conf file above should contain: spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem spark.hadoop.fs.s3a.access.key=PASTE_IAM-ROLE_ACCESSKEY spark.hadoop.fs.s3a.secret.key=PASTE_IAM-ROLE_SECRETKEY When the job is running, can visit the Spark Context at http://node1:4040/jobs/ (http://node1:4041/jobs/ for me first time, since I forwarded the vagrant VM's ports at +1. However, subsequent times it was on node1:4040/jobs?) ------------- APPJAR=target/cc-spark-0.2-SNAPSHOT-jar-with-dependencies.jar $SPARK_HOME/bin/spark-submit \ --conf spark.hadoop.parquet.enable.dictionary=true \ --conf spark.hadoop.parquet.enable.summary-metadata=false \ --conf spark.sql.hive.metastorePartitionPruning=true \ --conf spark.sql.parquet.filterPushdown=true \ --conf spark.sql.parquet.mergeSchema=true \ --class org.commoncrawl.spark.examples.CCIndexExport $APPJAR \ --query "SELECT url, warc_filename, warc_record_offset, warc_record_length FROM ccindex WHERE crawl = 'CC-MAIN-2019-30' AND subset = 'warc' AND content_languages LIKE '%mri%'" \ --outputFormat csv \ --numOutputPartitions 10 \ --outputCompression gzip \ s3://commoncrawl/cc-index/table/cc-main/warc/ \ hdfs:///user/vagrant/cc-mri-csv ---------------- Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found https://stackoverflow.com/questions/39355354/spark-no-filesystem-for-scheme-https-cannot-load-files-from-amazon-s3 https://stackoverflow.com/questions/33356041/technically-what-is-the-difference-between-s3n-s3a-and-s3 "2018-01-10 Update Hadoop 3.0 has cut its s3: and s3n implementations: s3a is all you get. It is now significantly better than its predecessor and performs as least as good as the Amazon implementation. Amazon's "s3:" is still offered by EMR, which is their closed source client. Consult the EMR docs for more info." 1. https://stackoverflow.com/questions/30385981/how-to-access-s3a-files-from-apache-spark "Having experienced first hand the difference between s3a and s3n - 7.9GB of data transferred on s3a was around ~7 minutes while 7.9GB of data on s3n took 73 minutes [us-east-1 to us-west-1 unfortunately in both cases; Redshift and Lambda being us-east-1 at this time] this is a very important piece of the stack to get correct and it's worth the frustration. Here are the key parts, as of December 2015: Your Spark cluster will need a Hadoop version 2.x or greater. If you use the Spark EC2 setup scripts and maybe missed it, the switch for using something other than 1.0 is to specify --hadoop-major-version 2 (which uses CDH 4.2 as of this writing). 
You'll need to include what may at first seem to be an out of date AWS SDK library (built in 2014 as version 1.7.4) for versions of Hadoop as late as 2.7.1 (stable): aws-java-sdk 1.7.4. As far as I can tell using this along with the specific AWS SDK JARs for 1.10.8 hasn't broken anything.

You'll also need the hadoop-aws 2.7.1 JAR on the classpath. This JAR contains the class org.apache.hadoop.fs.s3a.S3AFileSystem.

In spark.properties you probably want some settings that look like this:

spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.access.key=ACCESSKEY
spark.hadoop.fs.s3a.secret.key=SECRETKEY

I've detailed this list in more detail on a post I wrote as I worked my way through this process. In addition I've covered all the exception cases I hit along the way and what I believe to be the cause of each and how to fix them."

2. The classpath used by hadoop can be found by running the command (https://stackoverflow.com/questions/26748811/setting-external-jars-to-hadoop-classpath):
       hadoop classpath

3. Got the hadoop-aws 2.7.6 jar from https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws/2.7.6 and put it into /home/vagrant

4. https://stackoverflow.com/questions/26748811/setting-external-jars-to-hadoop-classpath
   https://stackoverflow.com/questions/28520821/how-to-add-external-jar-to-hadoop-job/54459211#54459211

   vagrant@node1:~$ export LIBJARS=/home/vagrant/hadoop-aws-2.7.6.jar
   vagrant@node1:~$ export HADOOP_CLASSPATH=`echo ${LIBJARS} | sed s/,/:/g`
   vagrant@node1:~$ hadoop classpath

5. https://community.cloudera.com/t5/Community-Articles/HDP-2-4-0-and-Spark-1-6-0-connecting-to-AWS-S3-buckets/ta-p/245760
   "Download the aws sdk for java https://aws.amazon.com/sdk-for-java/
   Uploaded it to the hadoop directory. You should see the aws-java-sdk-1.10.65.jar in /usr/hdp/2.4.0.0-169/hadoop/"

   I got version 1.11.

[Can't find a spark.properties file, but this seems to contain spark-specific properties: $SPARK_HOME/conf/spark-defaults.conf

https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-properties.html
"The default Spark properties file is $SPARK_HOME/conf/spark-defaults.conf that could be overriden using spark-submit with the --properties-file command-line option."]

Can SUDO COPY the 2 jar files hadoop-aws-2.7.6.jar and aws-java-sdk-1.11.616.jar to: /usr/local/hadoop/share/hadoop/common/
(else /usr/local/hadoop/share/hadoop/hdfs/hadoop-aws-2.7.6.jar)
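For example (a sketch, assuming both jars were downloaded to /home/vagrant as in steps 3 and 5 above):

    sudo cp /home/vagrant/hadoop-aws-2.7.6.jar /home/vagrant/aws-java-sdk-1.11.616.jar /usr/local/hadoop/share/hadoop/common/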
--------
schema
https://commoncrawl.s3.amazonaws.com/cc-index/table/cc-main/index.html
---------------
More examples to try:
https://github.com/commoncrawl/cc-warc-examples
A bit outdated?
https://www.journaldev.com/20342/apache-spark-example-word-count-program-java
https://www.journaldev.com/20261/apache-spark

--------
sudo apt-get install maven
(or:
 sudo apt update
 sudo apt install maven)

git clone https://github.com/commoncrawl/cc-index-table.git
cd cc-index-table
mvn package

vagrant@node1:~/cc-index-table$ ./src/script/convert_url_index.sh https://commoncrawl.s3.amazonaws.com/cc-index/collections/CC-MAIN-2019-30/indexes/cdx-00000.gz hdfs:///user/vagrant/cc-index-table

spark:
https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-shell.html

============
Dr Bainbridge found the following vagrant file that will set up hadoop and spark, presumably for cluster computing:
https://github.com/martinprobson/vagrant-hadoop-hive-spark

Vagrant:
* Guide: https://www.vagrantup.com/intro/getting-started/index.html
* Common cmds: https://blog.ipswitch.com/5-vagrant-commands-you-need-to-know
* vagrant reload = vagrant halt + vagrant up
  https://www.vagrantup.com/docs/cli/reload.html
* https://stackoverflow.com/questions/46903623/how-to-use-firefox-ui-in-vagrant-box
* https://stackoverflow.com/questions/22651399/how-to-install-firefox-in-precise64-vagrant-box
  sudo apt-get -y install firefox
* vagrant install emacs: https://medium.com/@AnnaJS15/getting-started-with-virtualbox-and-vagrant-8d98aa271d2a
* hadoop conf: sudo vi /usr/local/hadoop-2.7.6/etc/hadoop/mapred-site.xml
* https://data-flair.training/forums/topic/mkdir-cannot-create-directory-data-name-node-is-in-safe-mode/

---
==> node1: Forwarding ports...
    node1: 8080 (guest) => 8081 (host) (adapter 1)
    node1: 8088 (guest) => 8089 (host) (adapter 1)
    node1: 9083 (guest) => 9084 (host) (adapter 1)
    node1: 4040 (guest) => 4041 (host) (adapter 1)
    node1: 18888 (guest) => 18889 (host) (adapter 1)
    node1: 16010 (guest) => 16011 (host) (adapter 1)
    node1: 22 (guest) => 2200 (host) (adapter 1)
==> node1: Running 'pre-boot' VM customizations...
==> node1: Checking for guest additions in VM...
    node1: The guest additions on this VM do not match the installed version of
    node1: VirtualBox! In most cases this is fine, but in rare cases it can
    node1: prevent things such as shared folders from working properly. If you see
    node1: shared folder errors, please make sure the guest additions within the
    node1: virtual machine match the version of VirtualBox you have installed on
    node1: your host and reload your VM.
    node1:
    node1: Guest Additions Version: 5.1.38
    node1: VirtualBox Version: 5.2
------------
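(Not from the original notes: a quick way to confirm one of the forwarded ports listed above is answering on the machine that runs vagrant - e.g. the YARN ResourceManager UI on guest port 8088, which the list maps to host port 8089 - is:

    curl -I http://localhost:8089/

Adjust the port for whichever of the mappings you're after.)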