I need a Hadoop Architect and a Hadoop Data Engineer to work together on a two-person project. They will collaborate on each objective below.
Position 1: Hadoop Architect
Position 2: Hadoop Data Engineer
Location: Minneapolis, MN
Duration: 6 months+
Resource Descriptions
Role | Description
Architect | See the work streams below
Data Engineer | See the work streams below
Description of Services
The Services are broken down into a number of separate work streams as defined below. Each contains unique tasks and deliverables.
The work streams are:
- Architecture
- Cluster Installation
- Server Logs Use Case
Architecture
Resources: Architect (CONS-ARCH-HR-1), Data Engineer (CONS-SRCONS-HR-1)
The Consultants are responsible for guidance and assistance in the architectural design of the Hortonworks Data Platform for the Customer deployment. Together with the Customer team, the Consultants will design and document the system architecture and operational expectations.
The major milestones and Deliverables for the Architecture work stream are:
- System Architecture Document
Cluster Installation
Resources: Architect (CONS-ARCH-HR-1), Data Engineer (CONS-SRCONS-HR-1)
Install the Hortonworks Data Platform in the Customer's compute environment, provide expert guidance, and perform the necessary configuration of the Hadoop cluster. All work will be performed side by side with the Customer's staff in order to maximize knowledge transfer to the Customer's organization.
The major milestones and Deliverables for the Cluster Installation work stream are:
- Installed Cluster in Dev, CAT, Prod and DR environments
Server Logs Use Case
Resources: Architect (CONS-ARCH-HR-1), Data Engineer (CONS-SRCONS-HR-1)
The Client would like to stream various server log files into Hadoop and then perform near-real-time searches and other types of analytics on that data. The Client's mail processing operations span the globe and include overseas sites.
The Client environment is generally understood to include:
• 1,000 Linux Servers, which store sys logs locally on disk
• 3,000 Database Servers (including Oracle, SQLServer and DB2 instances)
• 12,000 WebSphere "front end" Application Servers (4 in front of each DB Server)
• "N" Network Switches, which store their logs on NAS running on Linux
The major milestones and deliverables for this work stream are:
- Ingest mechanism for all data sources
- Ability to run ad-hoc queries against ingested data
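As a rough illustration of what the ingest mechanism for the syslog sources might look like on a Hortonworks Data Platform cluster of this era, here is a minimal Apache Flume agent configuration sketch. The agent name, port, channel sizing, and HDFS path are all illustrative assumptions, not details from the Client's environment; the actual design would be produced in the Architecture work stream.

```properties
# Hypothetical Flume agent "a1" -- all names, ports, and paths below
# are illustrative assumptions, not the Client's actual configuration.
a1.sources = syslog-src
a1.channels = mem-ch
a1.sinks = hdfs-sink

# Receive syslog events forwarded over TCP from the Linux servers
a1.sources.syslog-src.type = syslogtcp
a1.sources.syslog-src.host = 0.0.0.0
a1.sources.syslog-src.port = 5140
a1.sources.syslog-src.channels = mem-ch

# Buffer events in memory between source and sink
a1.channels.mem-ch.type = memory
a1.channels.mem-ch.capacity = 10000
a1.channels.mem-ch.transactionCapacity = 1000

# Land events in HDFS, partitioned by date to support ad-hoc queries
a1.sinks.hdfs-sink.type = hdfs
a1.sinks.hdfs-sink.channel = mem-ch
a1.sinks.hdfs-sink.hdfs.path = /data/logs/syslog/%Y-%m-%d
a1.sinks.hdfs-sink.hdfs.fileType = DataStream
a1.sinks.hdfs-sink.hdfs.rollInterval = 300
```

Once log data lands in date-partitioned HDFS directories like this, the second deliverable (ad-hoc queries against ingested data) could be met by layering an external Hive table over the same paths, though the actual query technology would be chosen during the engagement.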
Thanks & Regards,
Kuldeep Yadav
Sage Group Consulting, Inc.
3400 Highway 35, Suite # 9, Hazlet, NJ 07730
Email: kkyadav@sagetl.com
Direct: 732-856-5824
Work: 732-767-0010 Ext: 502
Fax: 732-767-0015
Website: www.sageci.com
Yahoo IM: kd.intime@ymail.com
GTalk: kd.intime@gmail.com