
MongoDB: Hadoop Integration 1


Hadoop and MongoDB Use Cases

The following are some example deployments with MongoDB and Hadoop. The goal is to provide a high-level description of how MongoDB and Hadoop can fit together in a typical Big Data stack. In each of the following examples, MongoDB is used as the “operational” real-time data store and Hadoop is used for offline batch data processing and analysis.

Batch Aggregation

In several scenarios the built-in aggregation functionality provided by MongoDB is sufficient for analyzing your data. However, in certain cases significantly more complex data aggregation may be necessary. This is where Hadoop can provide a powerful framework for complex analytics.

In this scenario data is pulled from MongoDB and processed within Hadoop via one or more MapReduce jobs. Data may also be brought in from additional sources within these MapReduce jobs to develop a multi-datasource solution. Output from these MapReduce jobs can then be written back to MongoDB for later querying and ad-hoc analysis. Applications built on top of MongoDB can now use the information from the batch analytics to present to the end user or to drive other downstream features.
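As a rough sketch of how such a job can be wired together with the MongoDB Connector for Hadoop (introduced later in this post), the example below counts documents per category in an operational collection and writes one summary document per category back to MongoDB. The database, collection, and field names ("appdb.events", "category", and so on) are hypothetical, and the sketch assumes the connector's MongoInputFormat, MongoOutputFormat, BSONWritable, and MongoConfigUtil classes.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.bson.BSONObject;
    import org.bson.BasicBSONObject;
    import com.mongodb.hadoop.MongoInputFormat;
    import com.mongodb.hadoop.MongoOutputFormat;
    import com.mongodb.hadoop.io.BSONWritable;
    import com.mongodb.hadoop.util.MongoConfigUtil;

    import java.io.IOException;

    public class CategoryCountJob {

        /** Emit (category, 1) for every document read from the input collection. */
        public static class CategoryMapper extends Mapper<Object, BSONObject, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text category = new Text();

            @Override
            protected void map(Object key, BSONObject doc, Context context)
                    throws IOException, InterruptedException {
                // "category" is a hypothetical field on the operational documents.
                Object value = doc.get("category");
                if (value != null) {
                    category.set(value.toString());
                    context.write(category, ONE);
                }
            }
        }

        /** Sum the counts and emit one summary document per category. */
        public static class CategoryReducer extends Reducer<Text, IntWritable, Text, BSONWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int total = 0;
                for (IntWritable v : values) {
                    total += v.get();
                }
                BasicBSONObject summary = new BasicBSONObject();
                summary.put("category", key.toString());
                summary.put("count", total);
                context.write(key, new BSONWritable(summary));
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical input and output collections.
            MongoConfigUtil.setInputURI(conf, "mongodb://localhost:27017/appdb.events");
            MongoConfigUtil.setOutputURI(conf, "mongodb://localhost:27017/appdb.event_counts");

            Job job = Job.getInstance(conf, "category counts");
            job.setJarByClass(CategoryCountJob.class);
            job.setMapperClass(CategoryMapper.class);
            job.setReducerClass(CategoryReducer.class);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(IntWritable.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(BSONWritable.class);
            job.setInputFormatClass(MongoInputFormat.class);    // read documents from MongoDB
            job.setOutputFormatClass(MongoOutputFormat.class);  // write summaries back to MongoDB
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

The application can then read the summary collection directly to present the batch results to end users, as described above.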



Data Warehouse

In a typical production scenario, your application’s data may live in multiple datastores, each with its own query language and functionality. To reduce complexity in these scenarios, Hadoop can be used as a data warehouse and act as a centralized repository for data from the various sources.

In this situation, you could have periodic MapReduce jobs that load data from MongoDB into Hadoop. This could be in the form of “daily” or “weekly” data loads pulled from MongoDB via MapReduce. Once the data from MongoDB is available within Hadoop, alongside data from other sources, the combined, larger dataset can be queried. Data analysts then have the option of using either MapReduce or Pig to create jobs that query these larger datasets incorporating data from MongoDB.
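As a minimal sketch of such a “daily” load, assuming the connector classes described later in this post, the map-only job below filters the source collection with the connector's query setting and copies the matching documents into a dated HDFS directory as raw BSON. The collection name, the "day" field, and the HDFS path are all hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.bson.BSONObject;
    import com.mongodb.hadoop.BSONFileOutputFormat;
    import com.mongodb.hadoop.MongoInputFormat;
    import com.mongodb.hadoop.io.BSONWritable;
    import com.mongodb.hadoop.util.MongoConfigUtil;

    import java.io.IOException;

    public class DailyLoadJob {

        /** Pass-through mapper: copy each MongoDB document into the BSON output unchanged. */
        public static class PassThroughMapper
                extends Mapper<Object, BSONObject, NullWritable, BSONWritable> {
            @Override
            protected void map(Object key, BSONObject doc, Context context)
                    throws IOException, InterruptedException {
                context.write(NullWritable.get(), new BSONWritable(doc));
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical source collection; the query restricts the load to one day's documents.
            MongoConfigUtil.setInputURI(conf, "mongodb://localhost:27017/appdb.events");
            MongoConfigUtil.setQuery(conf, "{ \"day\": \"2014-01-01\" }");

            Job job = Job.getInstance(conf, "daily mongodb load");
            job.setJarByClass(DailyLoadJob.class);
            job.setMapperClass(PassThroughMapper.class);
            job.setNumReduceTasks(0);                              // map-only copy
            job.setInputFormatClass(MongoInputFormat.class);       // read from MongoDB
            job.setOutputFormatClass(BSONFileOutputFormat.class);  // write .bson files to HDFS
            job.setOutputKeyClass(NullWritable.class);
            job.setOutputValueClass(BSONWritable.class);
            // Hypothetical warehouse layout: one directory per day.
            FileOutputFormat.setOutputPath(job, new Path("/warehouse/events/2014-01-01"));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Once the .bson files sit in HDFS next to data from the other sources, later MapReduce or Pig jobs can query the whole warehouse together.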



ETL Data

MongoDB may be the operational datastore for your application, but there may also be other datastores holding your organization’s data. In this scenario it is useful to be able to move data from one datastore to another, whether from your application’s datastore to another database or vice versa. Moving the data involves much more than simply piping it from one mechanism to another, and this is where Hadoop can be used.

In this scenario, MapReduce jobs are used to extract, transform, and load data from one store to another. Hadoop can act as a complex ETL mechanism to migrate data in various forms via one or more MapReduce jobs that pull the data from one store, apply multiple transformations (such as new data layouts or additional aggregation), and load the data into another store. This approach can be used to move data into or out of MongoDB, depending on the desired result.
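A minimal sketch of such an ETL pass, again assuming the connector introduced later in this post: the map-only job below reads BSON dumps from HDFS, reshapes each record, and loads the result into a MongoDB collection. The field names and paths are hypothetical, and running the job in the other direction (MongoDB as the source, another store as the sink) follows the same pattern.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.bson.BSONObject;
    import org.bson.BasicBSONObject;
    import com.mongodb.hadoop.BSONFileInputFormat;
    import com.mongodb.hadoop.MongoOutputFormat;
    import com.mongodb.hadoop.io.BSONWritable;
    import com.mongodb.hadoop.util.MongoConfigUtil;

    import java.io.IOException;

    public class LegacyImportJob {

        /** Reshape each legacy record into the layout the application expects. */
        public static class ReshapeMapper
                extends Mapper<Object, BSONObject, NullWritable, BSONWritable> {
            @Override
            protected void map(Object key, BSONObject legacy, Context context)
                    throws IOException, InterruptedException {
                // Field names are hypothetical; the point is that arbitrary
                // transformations can be applied while the data is in flight.
                BasicBSONObject doc = new BasicBSONObject();
                doc.put("customerId", legacy.get("cust_no"));
                doc.put("name", legacy.get("cust_name"));
                doc.put("importedFrom", "legacy-dump");
                // With a NullWritable key, the assumption is that MongoDB assigns the _id values.
                context.write(NullWritable.get(), new BSONWritable(doc));
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical destination collection in MongoDB.
            MongoConfigUtil.setOutputURI(conf, "mongodb://localhost:27017/appdb.customers");

            Job job = Job.getInstance(conf, "legacy import");
            job.setJarByClass(LegacyImportJob.class);
            job.setMapperClass(ReshapeMapper.class);
            job.setNumReduceTasks(0);                            // map-only ETL pass
            job.setInputFormatClass(BSONFileInputFormat.class);  // extract .bson dumps from HDFS
            job.setOutputFormatClass(MongoOutputFormat.class);   // load into MongoDB
            job.setOutputKeyClass(NullWritable.class);
            job.setOutputValueClass(BSONWritable.class);
            FileInputFormat.addInputPath(job, new Path("/staging/legacy-dump"));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }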



MongoDB Connector for Hadoop

The MongoDB Connector for Hadoop is a plugin for Hadoop that provides the ability to use MongoDB as an input source and/or an output destination.

The source code is available on GitHub, where you can find a more comprehensive README.

If you have questions, please email the mongodb-user mailing list. For any issues, please file a ticket in Jira.

Installation

The MongoDB Connector for Hadoop uses the Gradle build tool for compilation. To build, simply invoke the jar task with the following command:

            ./gradlew jar

          

The MongoDB Connector for Hadoop supports a number of Hadoop releases. You can change the Hadoop version the build targets by passing the hadoop_version parameter to Gradle. For instance, to build against Apache Hadoop 2.2 use the following command:

            ./gradlew jar -Phadoop_version=2.2

After building, you will need to place the “core” jar and the mongo-java-driver in the lib directory of each Hadoop server.

For more complete installation instructions, please see the README.


References

http://docs.mongodb.org/ecosystem/tools/hadoop/

http://docs.mongodb.org/ecosystem/use-cases/hadoop/

http://www.mongodb.com/press/integration-hadoop-and-mongodb-big-data%E2%80%99s-two-most-popular-technologies-gets-significant


