Design technical architecture and develop Big Data solutions using AWS, Hadoop MapReduce, Pig, Hive, Sqoop, and Flume. Orchestrate hundreds of Hive queries using Oozie workflows and Hadoop data serialization formats such as Avro. Extract data from MySQL databases into HDFS, Hive, and HBase using Sqoop. Perform data modeling, connect to Cassandra from Spark, and save summarized data frames to Cassandra. Develop process workflows using Apache NiFi to extract, transform, and load raw data into HDFS and then process it into Hive tables. Migrate ETL processes from MS SQL Server to the Azure cloud using Azure Data Factory and Databricks. Transform and analyze data using PySpark and Hive based on ETL mappings. Implement ETL processes using Hive and Pig, with Python and Java for data cleansing. Will work in Manchester, CT and/or at various client sites throughout the U.S. Must be willing to travel and/or relocate.
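As an illustration of the Spark-to-Cassandra duty described above, the following minimal PySpark sketch summarizes a data frame and saves it to Cassandra through the open-source spark-cassandra-connector. The application name, Hive source table, keyspace, table, and column names are hypothetical placeholders, not details taken from this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Assumes the spark-cassandra-connector package is on the classpath and that
    # spark.cassandra.connection.host points at the cluster (placeholder host).
    spark = (SparkSession.builder
             .appName("orders-summary")                      # hypothetical app name
             .config("spark.cassandra.connection.host", "cassandra-host")
             .enableHiveSupport()
             .getOrCreate())

    # Read raw order data from a Hive table (hypothetical name) and summarize it.
    orders = spark.table("raw_db.orders")
    summary = (orders.groupBy("customer_id")
                     .agg(F.sum("amount").alias("total_amount"),
                          F.count("*").alias("order_count")))

    # Save the summarized data frame to Cassandra (hypothetical keyspace/table).
    (summary.write
            .format("org.apache.spark.sql.cassandra")
            .options(keyspace="analytics", table="customer_summary")
            .mode("append")
            .save())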
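Similarly, a minimal sketch of the PySpark/Hive transformation step: raw data landed in HDFS is cleansed and written to a Hive table. The HDFS path, columns, and target database/table are assumptions standing in for real ETL mappings.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("raw-to-hive")          # hypothetical app name
             .enableHiveSupport()
             .getOrCreate())

    # Load raw delimited data landed in HDFS (hypothetical path and schema).
    raw = (spark.read
                .option("header", "true")
                .csv("hdfs:///data/raw/events/"))

    # Apply simple cleansing rules standing in for project-specific ETL mappings.
    clean = (raw.filter(F.col("event_id").isNotNull())
                .withColumn("event_ts", F.to_timestamp("event_time"))
                .withColumn("event_date", F.to_date("event_ts"))
                .drop("event_time"))

    # Write the result to a partitioned Hive table (hypothetical database/table).
    (clean.write
          .mode("overwrite")
          .partitionBy("event_date")
          .saveAsTable("curated_db.events"))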
Mail Resume to HR Dept., Cyma Systems, Inc., 360 Tolland Turnpike, Suite 2D, Manchester, CT 06042.