https://codereview.stackexchange.com/questions/198343/crawl-and-gather-all-the-urls-recursively-in-a-domain
http://lucene.472066.n3.nabble.com/Using-nutch-just-for-the-crawler-fetcher-td611918.html
https://www.quora.com/What-are-some-Web-crawler-tips-to-avoid-crawler-traps
https://cwiki.apache.org/confluence/display/nutch/
https://cwiki.apache.org/confluence/display/NUTCH/Nutch2Crawling
https://cwiki.apache.org/confluence/display/nutch/ReaddbOptions
https://moz.com/top500

-----------
NUTCH
-----------
https://stackoverflow.com/questions/35449673/nutch-and-solr-indexing-blacklist-domain
https://nutch.apache.org/apidocs/apidocs-1.6/org/apache/nutch/urlfilter/domainblacklist/DomainBlacklistURLFilter.html
https://lucene.472066.n3.nabble.com/blacklist-for-crawling-td618343.html
https://lucene.472066.n3.nabble.com/Content-of-size-X-was-truncated-to-Y-td4003517.html

Google: nutch mirror web site
https://stackoverflow.com/questions/33354460/nutch-clone-website
[https://stackoverflow.com/questions/35714897/nutch-not-crawling-entire-website
    fetch -all seems to be a nutch v2 thing?]

Google (30 Sep): site mirroring with nutch
https://grokbase.com/t/nutch/user/125sfbg0pt/using-nutch-for-web-site-mirroring
https://lucene.472066.n3.nabble.com/Using-nutch-just-for-the-crawler-fetcher-td611918.html
http://www.cs.ucy.ac.cy/courses/EPL660/lectures/lab6.pdf
    slide p.5 onwards

crawler software options:
https://repositorio.iscte-iul.pt/bitstream/10071/2871/1/Building%20a%20Scalable%20Index%20and%20Web%20Search%20Engine%20for%20Music%20on.pdf
    See also p.20: HTTrack

Google: nutch performance tuning
* https://stackoverflow.com/questions/24383212/apache-nutch-performance-tuning-for-whole-web-crawling
* https://stackoverflow.com/questions/4871972/how-to-speed-up-crawling-in-nutch
* https://cwiki.apache.org/confluence/display/nutch/OptimizingCrawls

NUTCH INSTALLATION:
* Nutch v1: https://cwiki.apache.org/confluence/display/nutch/NutchTutorial#NutchTutorial-SetupSolrforsearch

Nutch v2 installation and set up:
* https://cwiki.apache.org/confluence/display/NUTCH/Nutch2Tutorial
* https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781783286850/1/ch01lvl1sec09/installing-and-configuring-apache-nutch

Nutch doesn't work with spark (yet):
https://stackoverflow.com/questions/29950299/distributed-web-crawling-using-apache-spark-is-it-possible

SOLR:
* Query syntax: http://www.solrtutorial.com/solr-query-syntax.html
* Deleting a core: https://factorpad.com/tech/solr/reference/solr-delete.html
* If you change a nutch 2 configuration, https://stackoverflow.com/questions/16401667/java-lang-classnotfoundexception-org-apache-gora-hbase-store-hbasestore explains that you can rebuild nutch by cd-ing into the nutch source directory and running:
      ant clean
      ant runtime

----------------------------------
Apache Nutch 2 with newer HBase
hbase-common-1.4.8.jar

1. The hbase jar files need to go into runtime/local/lib.
   But not slf4j-log4j12-1.7.10.jar (there's already a slf4j-log4j12-1.7.5.jar) - so remove that from runtime/local/lib after copying the jars over.

2. https://stackoverflow.com/questions/46340416/how-to-compile-nutch-2-3-1-with-hbase-1-2-6
   https://stackoverflow.com/questions/39834423/apache-nutch-fetcherjob-throws-nosuchelementexception-deep-in-gora/39837926#39837926

Unfortunately, the page https://paste.apache.org/jjqz referred to above, which contained the patches for using Gora 0.7, is no longer available.
http://mail-archives.apache.org/mod_mbox/nutch-user/201602.mbox/%3C56B2EA23.8080801@cisinlabs.com%3E
https://www.mail-archive.com/user@nutch.apache.org/msg14245.html
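A sketch of step 1 above (the jar shuffle), assuming HBase was unpacked under /usr/local/hbase and Nutch 2.3.1 lives in ~/apache-nutch-2.3.1 - adjust the paths to your own layout:

    NUTCH_LIB=~/apache-nutch-2.3.1/runtime/local/lib

    # copy the HBase jars (hbase-common-1.4.8.jar and friends) into Nutch's local runtime
    cp /usr/local/hbase/lib/hbase-*.jar "$NUTCH_LIB"

    # if slf4j-log4j12-1.7.10.jar came across with them, remove it again:
    # runtime/local/lib already ships slf4j-log4j12-1.7.5.jar and the two will clash
    rm -f "$NUTCH_LIB"/slf4j-log4j12-1.7.10.jar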
------------------------------------------------------------------------------
Other way: Nutch on its own vagrant with specified hbase, or nutch with mongodb
------------------------------------------------------------------------------
* https://lobster1234.github.io/2017/08/14/search-with-nutch-mongodb-solr/
* https://waue0920.wordpress.com/2016/08/25/nutch-2-3-1-hbase-0-98-hadoop-2-5-solr-4-10-3/

The older but recommended hbase 0.98.21 for hadoop 2 can be downloaded from https://archive.apache.org/dist/hbase/0.98.21/

-----
HBASE commands
/usr/local/hbase/bin/hbase shell
https://learnhbase.net/2013/03/02/hbase-shell-commands/
http://dwgeek.com/read-hbase-table-using-hbase-shell-get-command.html/
dropping tables: https://www.tutorialspoint.com/hbase/hbase_drop_table.htm

> list
    davidbHomePage_webpage is a table
> get 'davidbHomePage_webpage', '1'

Solution to get a working nutch2:
get http://trac.greenstone.org/browser/gs3-extensions/maori-lang-detection/hdfs-cc-work/vagrant-for-nutch2.tar.gz
and follow the instructions in my README file in there.

---------------------------------------------------------------------
ALTERNATIVES TO NUTCH - looking for site mirroring capabilities
---------------------------------------------------------------------
=> https://anarc.at/services/archive/web/

Autistici's crawl [https://git.autistici.org/ale/crawl] needs Go:
    https://medium.com/better-programming/install-go-1-11-on-ubuntu-18-04-16-04-lts-8c098c503c5f
    https://guide.freecodecamp.org/go/installing-go/ubuntu-apt-get/
    To uninstall: https://medium.com/@firebitsbr/how-to-uninstall-from-the-apt-manager-uninstall-just-golang-go-from-universe-debian-ubuntu-82d6a3692cbd
    https://tecadmin.net/install-go-on-ubuntu/
    [our vagrant VMs are Ubuntu 16.04 LTS, as discovered by running the cmd "lsb_release -a"]

https://alternativeto.net/software/apache-nutch/
https://alternativeto.net/software/wget/
https://github.com/ArchiveTeam/grab-site/blob/master/README.md#inspecting-warc-files-in-the-terminal
https://github.com/ArchiveTeam/wpull

-------------------
Running nutch 2.x
-------------------
LINKS
https://lucene.472066.n3.nabble.com/Nutch-2-x-readdb-command-dump-td4033937.html
https://cwiki.apache.org/confluence/display/nutch/ReaddbOptions
https://lobster1234.github.io/2017/08/14/search-with-nutch-mongodb-solr/   ## most useful for running nutch 2.x crawls
https://www.mobomo.com/2017/06/the-basics-working-with-nutch-2-x/
    "Fetch
    This is where the magic happens. During the fetch step, Nutch crawls the urls selected in the generate step. The most important argument you need is -threads: this sets the number of fetcher threads per task. Increasing this will make crawling faster, but setting it too high can overwhelm a site and it might shut out your crawler, as well as take up too much memory from your machine. Run it like this:
    $ nutch fetch -threads 50"
https://examples.javacodegeeks.com/enterprise-java/apache-hadoop/apache-hadoop-nutch-tutorial/
https://www.yegor256.com/2019/04/17/nutch-from-java.html
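A minimal sketch of that fetch step in the nutch 2.x style used later in these notes (-all fetches every pending batch; the thread count is just the figure from the mobomo quote above and needs tuning per site):

    cd ~/apache-nutch-2.3.1/runtime/local

    # fetch every generated batch with 50 fetcher threads
    # (more threads = faster, but too many can hammer the target site)
    ./bin/nutch fetch -all -threads 50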
http://nutch.sourceforge.net/docs/en/tutorial.html

Intranet: Configuration
To configure things for intranet crawling you must:
    Create a flat file of root urls. For example, to crawl the nutch.org site you might start with a file named urls containing just the Nutch home page. All other Nutch pages should be reachable from this page. The urls file would thus look like:
        http://www.nutch.org/
    Edit the file conf/crawl-urlfilter.txt and replace MY.DOMAIN.NAME with the name of the domain you wish to crawl. For example, if you wished to limit the crawl to the nutch.org domain, the line should read:
        +^http://([a-z0-9]*\.)*nutch.org/
    This will include any url in the domain nutch.org.

Intranet: Running the Crawl
Once things are configured, running the crawl is easy. Just use the crawl command. Its options include:
    -dir dir          names the directory to put the crawl in.
    -depth depth      indicates the link depth from the root page that should be crawled.
    -delay delay      determines the number of seconds between accesses to each host.
    -threads threads  determines the number of threads that will fetch in parallel.
For example, a typical call might be:
    bin/nutch crawl urls -dir crawl.test -depth 3 >& crawl.log
Typically one starts testing one's configuration by crawling at low depths, and watching the output to check that desired pages are found. Once one is more confident of the configuration, then an appropriate depth for a full crawl is around 10.    <===========
Once crawling has completed, one can skip to the Searching section below.

-----------------------------------
Actually running nutch 2.x - steps
-----------------------------------
MANUALLY GOING THROUGH THE CYCLE 3 TIMES
(a scripted version of this cycle is sketched below, after the bin/crawl notes):

    cd ~/apache-nutch-2.3.1/runtime/local

    ./bin/nutch inject urls

    ./bin/nutch generate -topN 50
    ./bin/nutch fetch -all
    ./bin/nutch parse -all
    ./bin/nutch updatedb -all

    ./bin/nutch generate -topN 50
    ./bin/nutch fetch -all
    ./bin/nutch parse -all
    ./bin/nutch updatedb -all

    ./bin/nutch generate -topN 50
    ./bin/nutch fetch -all
    ./bin/nutch parse -all
    ./bin/nutch updatedb -all

Dump output on the local filesystem:
    rm -rf /tmp/bla
    ./bin/nutch readdb -dump /tmp/bla [-crawlId ID -text]
    less /tmp/bla/part-r-00000

To dump output onto hdfs instead:
    You need the hdfs host name if sending/dumping nutch crawl output to a location on hdfs.
    The host is defined in /usr/local/hadoop/etc/hadoop/core-site.xml for the property fs.defaultFS
    (https://stackoverflow.com/questions/27956973/java-io-ioexception-incomplete-hdfs-uri-no-host);
    the host is hdfs://node2/ in this case. So:

    hdfs dfs -rmdir /user/vagrant/dump
    XXX ./bin/nutch readdb -dump user/vagrant/dump -text            ### won't work
    XXX ./bin/nutch readdb -dump hdfs:///user/vagrant/dump -text    ### won't work
    ./bin/nutch readdb -dump hdfs://node2/user/vagrant/dump -text

USING THE SCRIPT TO ATTEMPT TO CRAWL A SITE
* Choosing to repeat the cycle 10 times because, as per http://nutch.sourceforge.net/docs/en/tutorial.html,
  "Typically one starts testing one's configuration by crawling at low depths, and watching the output to check that desired pages are found. Once one is more confident of the configuration, then an appropriate depth for a full crawl is around 10."
* Use the ./bin/crawl script: provide the seed urls dir, the crawlId and the number of times to repeat = 10
    vagrant@node2:~/apache-nutch-2.3.1/runtime/local$ ./bin/crawl urls davidbHomePage 10
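The scripted version of the manual cycle mentioned above - a rough sketch, as an alternative to ./bin/crawl (the iteration count and -topN value are just the ones used in these notes, and unlike bin/crawl there is no early-exit check when no new URLs are generated):

    #!/bin/bash
    # inject the seed urls once, then repeat the generate/fetch/parse/updatedb cycle
    cd ~/apache-nutch-2.3.1/runtime/local

    ./bin/nutch inject urls

    for i in 1 2 3; do
        echo "--- cycle $i ---"
        ./bin/nutch generate -topN 50
        ./bin/nutch fetch -all
        ./bin/nutch parse -all
        ./bin/nutch updatedb -all
    done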
* View the downloaded crawls. This time we need to provide the crawlId to readdb, in order to get a dump of its text contents:
    hdfs dfs -rm -r hdfs://node2/user/vagrant/dump2
    ./bin/nutch readdb -dump hdfs://node2/user/vagrant/dump2 -text -crawlId davidbHomePage

* View the contents:
    hdfs dfs -cat hdfs://node2/user/vagrant/dump2/part-r-*
  (a parameterised version of these dump/view steps is sketched after the STOPPING CONDITION notes below)

* FIND OUT NUMBER OF URLS DOWNLOADED FOR THE SITE:

    vagrant@node2:~/apache-nutch-2.3.1/runtime/local$ ./bin/nutch readdb -stats -crawlId davidbHomePage
    WebTable statistics start
    Statistics for WebTable:
    retry 0:    44
    status 5 (status_redir_perm):   4
    status 3 (status_gone): 1
    status 2 (status_fetched):      39
    jobs:   {[davidbHomePage]db_stats-job_local647846559_0001={jobName=[davidbHomePage]db_stats, jobID=job_local647846559_0001, counters={Map-Reduce Framework={MAP_OUTPUT_MATERIALIZED_BYTES=135, REDUCE_INPUT_RECORDS=8, SPILLED_RECORDS=16, MERGED_MAP_OUTPUTS=1, VIRTUAL_MEMORY_BYTES=0, MAP_INPUT_RECORDS=44, SPLIT_RAW_BYTES=935, FAILED_SHUFFLE=0, MAP_OUTPUT_BYTES=2332, REDUCE_SHUFFLE_BYTES=135, PHYSICAL_MEMORY_BYTES=0, GC_TIME_MILLIS=0, REDUCE_INPUT_GROUPS=8, COMBINE_OUTPUT_RECORDS=8, SHUFFLED_MAPS=1, REDUCE_OUTPUT_RECORDS=8, MAP_OUTPUT_RECORDS=176, COMBINE_INPUT_RECORDS=176, CPU_MILLISECONDS=0, COMMITTED_HEAP_BYTES=595591168}, File Input Format Counters ={BYTES_READ=0}, File System Counters={FILE_LARGE_READ_OPS=0, FILE_WRITE_OPS=0, FILE_READ_OPS=0, FILE_BYTES_WRITTEN=1788140, FILE_BYTES_READ=1223290}, File Output Format Counters ={BYTES_WRITTEN=275}, Shuffle Errors={CONNECTION=0, WRONG_LENGTH=0, BAD_ID=0, WRONG_MAP=0, WRONG_REDUCE=0, IO_ERROR=0}}}}
    TOTAL urls:     44
    max score:      1.0
    avg score:      0.022727273
    min score:      0.0
    WebTable statistics: done

------------------------------------
STOPPING CONDITION
Seems inbuilt.
* When I tell it to cycle 15 times, it stops after 6 cycles saying there are no more URLs to fetch:

    vagrant@node2:~/apache-nutch-2.3.1/runtime/local$ ./bin/crawl urls davidbHomePage2 15
    ---
    No SOLRURL specified. Skipping indexing.
    Injecting seed URLs
    ...
    Thu Oct 3 09:22:23 UTC 2019 : Iteration 6 of 15
    Generating batchId
    Generating a new fetchlist
    ...
    Generating batchId
    Generating a new fetchlist
    /home/vagrant/apache-nutch-2.3.1/runtime/local/bin/nutch generate -D mapred.reduce.tasks=2 -D mapred.child.java.opts=-Xmx1000m -D mapred.reduce.tasks.speculative.execution=false -D mapred.map.tasks.speculative.execution=false -D mapred.compress.map.output=true -topN 50000 -noNorm -noFilter -adddays 0 -crawlId davidbHomePage2 -batchId 1570094569-27637
    GeneratorJob: starting at 2019-10-03 09:22:49
    GeneratorJob: Selecting best-scoring urls due for fetch.
    GeneratorJob: starting
    GeneratorJob: filtering: false
    GeneratorJob: normalizing: false
    GeneratorJob: topN: 50000
    GeneratorJob: finished at 2019-10-03 09:22:52, time elapsed: 00:00:02
    GeneratorJob: generated batch id: 1570094569-27637 containing 0 URLs
    Generate returned 1 (no new segments created)
    Escaping loop: no more URLs to fetch now
    vagrant@node2:~/apache-nutch-2.3.1/runtime/local$
    ---

* Running readdb -stats shows 44 URLs fetched, just as the first time (when the crawlId had been "davidbHomePage"):

    vagrant@node2:~/apache-nutch-2.3.1/runtime/local$ ./bin/nutch readdb -stats -crawlId davidbHomePage2
    ---
    WebTable statistics start
    Statistics for WebTable:
    retry 0:    44
    status 5 (status_redir_perm):   4
    status 3 (status_gone): 1
    status 2 (status_fetched):      39
    jobs:   {[davidbHomePage2]db_stats-job_local985519583_0001={jobName=[davidbHomePage2]db_stats, jobID=job_local985519583_0001, counters={Map-Reduce Framework={MAP_OUTPUT_MATERIALIZED_BYTES=135, REDUCE_INPUT_RECORDS=8, SPILLED_RECORDS=16, MERGED_MAP_OUTPUTS=1, VIRTUAL_MEMORY_BYTES=0, MAP_INPUT_RECORDS=44, SPLIT_RAW_BYTES=935, FAILED_SHUFFLE=0, MAP_OUTPUT_BYTES=2332, REDUCE_SHUFFLE_BYTES=135, PHYSICAL_MEMORY_BYTES=0, GC_TIME_MILLIS=4, REDUCE_INPUT_GROUPS=8, COMBINE_OUTPUT_RECORDS=8, SHUFFLED_MAPS=1, REDUCE_OUTPUT_RECORDS=8, MAP_OUTPUT_RECORDS=176, COMBINE_INPUT_RECORDS=176, CPU_MILLISECONDS=0, COMMITTED_HEAP_BYTES=552599552}, File Input Format Counters ={BYTES_READ=0}, File System Counters={FILE_LARGE_READ_OPS=0, FILE_WRITE_OPS=0, FILE_READ_OPS=0, FILE_BYTES_WRITTEN=1788152, FILE_BYTES_READ=1223290}, File Output Format Counters ={BYTES_WRITTEN=275}, Shuffle Errors={CONNECTION=0, WRONG_LENGTH=0, BAD_ID=0, WRONG_MAP=0, WRONG_REDUCE=0, IO_ERROR=0}}}}
    TOTAL urls:     44
    ---
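A parameterised version of the dump-and-view steps above - a sketch only; the crawlId, HDFS namenode and dump path are just the ones used in these notes, so substitute your own:

    # dump the text contents of a given crawlId to HDFS and page through them
    CRAWL_ID=davidbHomePage
    DUMP_DIR=hdfs://node2/user/vagrant/dump-$CRAWL_ID

    hdfs dfs -rm -r "$DUMP_DIR"      # clear any previous dump of the same name
    ./bin/nutch readdb -dump "$DUMP_DIR" -text -crawlId "$CRAWL_ID"
    hdfs dfs -cat "$DUMP_DIR/part-r-*" | less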
----------------------------------------------------------------------
Testing URLFilters: testing a URL to see if it's accepted
----------------------------------------------------------------------
Use the command
    ./bin/nutch org.apache.nutch.net.URLFilterChecker -allCombined
(mentioned at https://lucene.472066.n3.nabble.com/Correct-syntax-for-regex-urlfilter-txt-trying-to-exclude-single-path-results-td3600376.html)

Use as follows:
    cd apache-nutch-2.3.1/runtime/local
    ./bin/nutch org.apache.nutch.net.URLFilterChecker -allCombined

Then paste the URL you want to test and press Enter.
    A + in front of the response means the URL was accepted.
    A - in front of the response means the URL was rejected.
You can continue pasting URLs to test against the filters until you send Ctrl-D to terminate input.

-------------------
Dr Nichols's suggestion: we can store a listing of potential product sites to inspect, by checking each URL for /mi in combination with whether the domain's IP geolocates to OUTSIDE New Zealand (tld nz).
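A rough command-line sketch of that check (the MaxMind Java APIs linked below are the programmatic route); it assumes the dnsutils and geoip-bin packages are installed, and example.com / urls.txt are just placeholders:

    # where does the domain's IP geolocate? (we're looking for countries other than NZ)
    DOMAIN=example.com
    IP=$(dig +short "$DOMAIN" | head -1)
    geoiplookup "$IP"      # prints something like "GeoIP Country Edition: US, United States"

    # and does the site have /mi pages? grep whatever URL listing we have for it
    grep '/mi' urls.txt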
* https://stackoverflow.com/questions/1415851/best-way-to-get-geo-location-in-java
    - https://mvnrepository.com/artifact/com.maxmind.geoip/geoip-api/1.2.10
    - older .dat.gz file is archived at https://web.archive.org/web/20180917084618/http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
    - and newer geo country data at https://dev.maxmind.com/geoip/geoip2/geolite2/
* https://dev.maxmind.com/geoip/geoip2/geolite2/
* older GeoIp API (has LookupService): https://github.com/maxmind/geoip-api-java
* newer GeoIp2 API: https://dev.maxmind.com/geoip/geoip2/downloadable/#MaxMind_APIs and https://maxmind.github.io/GeoIP2-java/doc/v2.12.0/
* https://maxmind.github.io/GeoIP2-java/
* https://github.com/AtlasOfLivingAustralia/ala-hub/issues/11
---
https://check-host.net/ip-info
https://ipinfo.info/html/ip_checker.php

----------
MongoDB
Installation:
https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/
https://docs.mongodb.com/manual/administration/install-on-linux/
https://hevodata.com/blog/install-mongodb-on-ubuntu/
https://www.digitalocean.com/community/tutorials/how-to-install-mongodb-on-ubuntu-16-04

CENTOS (Analytics): https://tecadmin.net/install-mongodb-on-centos/
FROM SOURCE: https://github.com/mongodb/mongo/wiki/Build-Mongodb-From-Source

GUI: https://robomongo.org/
    Robomongo is Robo 3T now

https://www.tutorialspoint.com/mongodb/mongodb_java.htm
JAR FILE: http://central.maven.org/maven2/org/mongodb/mongo-java-driver/
https://mongodb.github.io/mongo-java-driver/

INSTALLING THE MONGODB SERVER AND MONGO CLIENT ON LINUX
You need to have sudo and root powers.
https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/
http://www.programmersought.com/article/6500308940/

   52  sudo apt-get install mongodb-clients
   53  mongo 'mongodb://mongodb.cms.waikato.ac.nz:27017' -u anupama -p

Failed with:
    Error: HostAndPort: host is empty at src/mongo/shell/mongo.js:148
    exception: connect failed
This is due to a version incompatibility between the mongo Client and the mongodb Server.
The solution is to follow the instructions at http://www.programmersought.com/article/6500308940/ and then https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/ as below:

   54  sudo apt-get purge mongodb-clients
   55  sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
   56  echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
   57  sudo apt-get update
   58  sudo apt-get install mongodb-clients
   59  mongo 'mongodb://mongodb.cms.waikato.ac.nz:27017' -u anupama -p
       (still doesn't work)
   60  sudo apt-get install -y mongodb-org

The above ensures an up-to-date mongo client, but installs the mongodb server too. Maybe this is the only step that is needed to install an up-to-date mongo client and mongodb server?

   72  sudo service mongod status
  103  sudo service mongod start
       "mongod" stands for mongo-daemon. It runs the mongo db server, listening for client connections.
  104  sudo service mongod status
   88  sudo service mongod stop
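A quick sanity check after installing, without opening the shell (a sketch; it assumes both the client and the locally installed server are on the PATH):

    # client (shell) version
    mongo --version

    # local server version
    mongod --version

    # is the local server actually running?
    sudo service mongod status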
RUNNING AND USING THE MONGO CLIENT SHELL:
Among the many things you can do with the Mongo client shell, one can use it to find the mongo client version (which is the version of the shell) and the mongo db version.

To run the mongo client shell WITHOUT loading a db:

wharariki:[880]/Scratch/ak19/gs3-extensions/maori-lang-detection>mongo --shell -nodb
MongoDB shell version: 2.6.10        <<<<<<<<<-------------------<<<< MONGO CLIENT VERSION
type "help" for help
> help
        db.help()                    help on db methods
        db.mycoll.help()             help on collection methods
        sh.help()                    sharding helpers
        rs.help()                    replica set helpers
        help admin                   administrative help
        help connect                 connecting to a db help
        help keys                    key shortcuts
        help misc                    misc things to know
        help mr                      mapreduce

        show dbs                     show database names
        show collections             show collections in current database
        show users                   show users in current database
        show profile                 show most recent system.profile entries with time >= 1ms
        show logs                    show the accessible logger names
        show log [name]              prints out the last segment of log in memory, 'global' is default
        use <db_name>                set current database
        db.foo.find()                list objects in collection foo
        db.foo.find( { a : 1 } )     list objects in foo where a == 1
        it                           result of the last line evaluated; use to further iterate
        DBQuery.shellBatchSize = x   set default number of items to display on shell
        exit                         quit the mongo shell

> help connect
Normally one specifies the server on the mongo shell command line. Run mongo --help to see those options.
Additional connections may be opened:
    var x = new Mongo('host[:port]');
    var mydb = x.getDB('mydb');
or
    var mydb = connect('host[:port]/mydb');
Note: the REPL prompt only auto-reports getLastError() for the shell command line connection.

Getting help on connect options:
> var x = new Mongo('mongodb.cms.waikato.ac.nz:27017');
> var mydb = x.getDB('anupama');
> mydb.connect.help()
DBCollection help
        db.connect.find().help() - show DBCursor help
        db.connect.count()
        db.connect.copyTo(newColl) - duplicates collection by copying all documents to newColl; no indexes are copied.
        db.connect.convertToCapped(maxBytes) - calls {convertToCapped:'connect', size:maxBytes}} command
        db.connect.dataSize()
        db.connect.distinct( key ) - e.g. db.connect.distinct( 'x' )
        db.connect.drop() drop the collection
        db.connect.dropIndex(index) - e.g. db.connect.dropIndex( "indexName" ) or db.connect.dropIndex( { "indexKey" : 1 } )
        db.connect.dropIndexes()
        db.connect.ensureIndex(keypattern[,options]) - options is an object with these possible fields: name, unique, dropDups
        db.connect.reIndex()
        db.connect.find([query],[fields]) - query is an optional query filter. fields is optional set of fields to return. e.g. db.connect.find( {x:77} , {name:1, x:1} )
        db.connect.find(...).count()
        db.connect.find(...).limit(n)
        db.connect.find(...).skip(n)
        db.connect.find(...).sort(...)
        db.connect.findOne([query])
        db.connect.findAndModify( { update : ... , remove : bool [, query: {}, sort: {}, 'new': false] } )
        db.connect.getDB() get DB object associated with collection
        db.connect.getPlanCache() get query plan cache associated with collection
        db.connect.getIndexes()
        db.connect.group( { key : ..., initial: ..., reduce : ...[, cond: ...] } )
        db.connect.insert(obj)
        db.connect.mapReduce( mapFunction , reduceFunction , )
        db.connect.aggregate( [pipeline], ) - performs an aggregation on a collection; returns a cursor
        db.connect.remove(query)
        db.connect.renameCollection( newName , ) renames the collection.
        db.connect.runCommand( name , ) runs a db command with the given name where the first param is the collection name
        db.connect.save(obj)
        db.connect.stats()
        db.connect.storageSize() - includes free space allocated to this collection
        db.connect.totalIndexSize() - size in bytes of all the indexes
        db.connect.totalSize() - storage allocated for all data and indexes
        db.connect.update(query, object[, upsert_bool, multi_bool]) - instead of two flags, you can pass an object with fields: upsert, multi
        db.connect.validate( ) - SLOW
        db.connect.getShardVersion() - only for use with sharding
        db.connect.getShardDistribution() - prints statistics about data distribution in the cluster
        db.connect.getSplitKeysForChunks( ) - calculates split points over all chunks and returns splitter function
        db.connect.getWriteConcern() - returns the write concern used for any operations on this collection, inherited from server/db if set
        db.connect.setWriteConcern( ) - sets the write concern for writes to the collection
        db.connect.unsetWriteConcern( ) - unsets the write concern for writes to the collection
> mydb.version()
4.0.13        <<<<<<<<<-------------------<<<< MONGODB SERVER VERSION
(Check Mongo server version: https://stackoverflow.com/questions/38160412/how-to-find-the-exact-version-of-installed-mongodb)

Finally we now know the mongodb server version: 4.0.13. This version didn't work with our mongo client (shell) version of 2.6.10, and that's why we had to upgrade the client.

INSTALLING MONGO-DB AND CLIENT FROM: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/
    wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -
    echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.2.list
    sudo apt-get update
    sudo apt-get install -y mongodb-org

UNINSTALLING
https://www.anintegratedworld.com/uninstall-mongodb-in-ubuntu-via-command-line-in-3-easy-steps/

MONGO DB ROBO 3T
1. Download the "Double Pack" from https://robomongo.org/
2. Untar its contents. Then untar the tarball inside that.
3. Run:
    wharariki:[110]~/Downloads/robo3t-1.3.1-linux-x86_64-7419c406>./bin/robo3t
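For future reference, a one-liner sketch to get the server version without an interactive session (--eval runs the given JavaScript and exits; the password is still prompted for):

    # connect to the remote mongodb and print just its version
    mongo 'mongodb://mongodb.cms.waikato.ac.nz:27017/anupama' -u anupama -p --eval 'print(db.version())'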