Yasar Ahmed Khan
Email: yasarkhan11@outlook.com
Contact No.: +91 7382635553
Professional Synopsis:
Around 2+ years of experience as a Hadoop developer, with SQL and core Java.
Proficient in Pig, Hive, and HBase.
Solid understanding of key MapReduce concepts such as Mapper and Reducer.
Experienced in loading and processing bulk data using Hive.
Able to work individually or in a team, communicate with clients, and provide hands-on training to customers.
Good functional knowledge of the telecom domain and strong analytical skills.
Follow client requirements closely and resolve issues on time.
Take on challenging and complex issues.
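As an illustration of the Mapper/Reducer concept noted above, the following is a minimal plain-Java sketch (no Hadoop cluster or dependencies; the class and method names are hypothetical, for illustration only) of word-count-style map and reduce steps:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the map/reduce idea without Hadoop: the "mapper" emits
// (word, 1) pairs and the "reducer" sums the counts per key.
public class WordCountSketch {

    // Mapper step: split one input line into (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) {
                pairs.add(Map.entry(word, 1));
            }
        }
        return pairs;
    }

    // Reducer step: sum the emitted values for each key.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : new String[] {"hadoop hive pig", "hive hadoop"}) {
            pairs.addAll(map(line));
        }
        System.out.println(reduce(pairs).get("hadoop")); // prints 2
    }
}
```

In real Hadoop the same two steps are written as `Mapper` and `Reducer` subclasses and the framework handles shuffling the pairs between them.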
Professional Experience:
Working as a Software Engineer at SUNPRO CYBER SYSTEMS PVT. LTD.
from June 2014 to date.
Education Details:
• Bachelor of Engineering from JNTUH University in 2014.
Technical Skills:
Operating Systems : Windows family, UNIX.
Programming Languages : SQL, Core Java.
Tools : SQL Developer.
Project Details:
Project 2:
Organization : SUNPRO CYBER SYSTEMS PVT. LTD.
Project Name : Unify (CPOS)
Client : Vodafone India Services Pvt Ltd.
Duration : Nov 2015 to date.
Technology : Apache Hadoop
Project description:
Vodafone India Services Pvt. Ltd. is a UK-based organization that provides
telecommunication services in India. The project creates a centralized
point-of-sale solution for the leading telecom client Vodafone. CPOS is a
centralized solution that will replace the existing multiple disparate POS
solutions built on different technologies. The client already had three
different POS applications implemented in different regions across India.
The client wanted a new centralized POS for all regions and circles in India.
The solution is based on a service-oriented architecture: the different parts
of the functionality are implemented as services, integration with other
applications is done via various integration techniques, and process
implementation is done using Process Server. The solution is divided into
three main functional modules: inventory, sales, and the customer registration
system (CRS). Three databases are involved: BSCS, STG (production), and CPOS
(pre-production). To ensure data accuracy, loading into pre-production is done
as an initial first load followed by delta loads.
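The first-load/delta-load pattern described above can be sketched in plain Java as follows; record keys and values are simplified to strings, and the class and method names are hypothetical examples rather than the actual project code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a first load plus delta loads: the first load seeds the
// target with a full snapshot of the source, and each subsequent delta
// load overwrites changed records and adds new ones by key.
public class DeltaLoadSketch {

    // First load: full copy of the source snapshot into the target.
    static Map<String, String> firstLoad(Map<String, String> source) {
        return new HashMap<>(source);
    }

    // Delta load: changed/new keys replace or extend the target.
    static void deltaLoad(Map<String, String> target, Map<String, String> delta) {
        target.putAll(delta);
    }

    public static void main(String[] args) {
        // Hypothetical customer statuses in the production (STG) snapshot.
        Map<String, String> stg = Map.of("C1", "active", "C2", "barred");
        Map<String, String> cpos = firstLoad(stg);
        // Later delta: C2 becomes active again, new customer C3 appears.
        deltaLoad(cpos, Map.of("C2", "active", "C3", "active"));
        System.out.println(cpos.get("C2")); // prints active
    }
}
```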
Roles and Responsibilities:
Hands-on experience with MapReduce and HDFS.
Set up Pig and Hive on multiple nodes and developed solutions using
Pig, Hive, and MapReduce.
Developed MapReduce applications using Hadoop and the MapReduce
programming model.
Involved in developing Pig scripts.
Implemented application components using MapReduce to improve
performance.
Imported flat files and stored them in HDFS.
Monitored daily and monthly jobs.
Exposure to partitioning and Pig.
Wrote commands to check files and directories.
Technical Environment: Hadoop, SQL, Java, Toad.
Project 1:
Organization : SUNPRO CYBER SYSTEMS PVT. LTD.
Project Name : CRS (Customer Registration System)
Client : Vodafone India Services Pvt Ltd.
Duration : Nov 2014 to Oct 2015.
Technology : Apache Hadoop.
Project description:
CRS is a migration process from BSCS, EPOS, and PPS to CPOS, and it is a
telecom process covering all circles. It is essential for providing services
to the customer and maintaining a central point-of-sale repository. It covers
both prepaid and postpaid services and the MNP process for any network in any
circle. It is a purely telecom-domain, service-based process.
Service checks are performed against the STG and PROD databases; data is
moved into PROD (CPOS) as a first load followed by delta loads.
Technical Environment: SQL, Hive, Pig, HBase.
Responsibilities:
Exposure to SQL and Hive scripting.
Worked on custom MapReduce programs using Java.
Designed and developed Pig data-transformation scripts to work against
unstructured data from various data points and created a baseline.
Created and optimized Hive scripts for data analysts based on
the requirements.
Good experience working with sequence files and compressed file
formats.
Worked on performance issues and tuned Pig and Hive scripts.
Wrote script files for processing data and loading it into HDFS.
Created and maintained technical documentation for launching Hadoop
clusters and for executing Hive and Pig scripts.
Involved in improving performance using MapReduce components.
Stored data in HDFS and monitored jobs.
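The Pig data-transformation work above (grouping flat-file records and aggregating per key) can be sketched in plain Java; the field layout below is a hypothetical example, not the actual project data:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of a Pig-style GROUP BY ... COUNT over flat-file rows.
// Each row is "circle,service" (hypothetical layout); we count rows per
// circle, roughly what a Pig script such as
//   grouped = GROUP rows BY circle;
//   counts  = FOREACH grouped GENERATE group, COUNT(rows);
// would produce.
public class GroupCountSketch {

    static Map<String, Long> countByCircle(List<String> rows) {
        Map<String, Long> counts = new HashMap<>();
        for (String row : rows) {
            String circle = row.split(",")[0]; // first field is the grouping key
            counts.merge(circle, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> rows = List.of("MUM,prepaid", "DEL,postpaid", "MUM,postpaid");
        System.out.println(countByCircle(rows).get("MUM")); // prints 2
    }
}
```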