Yasar Ahmed Khan
Email: yasarkhan11@outlook.com
Contact No.: +917382635553
Professional Synopsis:
 Around 2+ years of experience as a Hadoop developer, with SQL and core Java.
 Proficient in writing Pig and Hive scripts and in working with HBase.
 Solid understanding of key MapReduce concepts such as Mapper and Reducer.
 Experienced in loading and processing bulk data using Hive.
 Able to work individually or in a team, communicate with clients, and
deliver hands-on training to customers.
 Good functional knowledge of the Telecom domain and strong analytical skills.
 Follow client requirements closely and resolve issues on time.
 Take on challenging and complex issues.
Professional Experience:
 Working as a Software Engineer at SUNPRO CYBER SYSTEMS PVT. LTD.
since June 2014.
Education Details:
• Bachelor of Engineering from JNTUH University, 2014.
Technical Skills:
Operating System : Windows family, UNIX.
Programming Languages : SQL, Core Java.
Tools : SQL Developer.
Project Details:
Project 2:
Organization : SUNPRO CYBER SYSTEMS PVT. LTD.
Project Name : Unify (CPOS)
Client : Vodafone India Services Pvt Ltd.
Duration : Nov 2015 to date.
Technology : Apache Hadoop
Project description:
Vodafone India Services Pvt. Ltd. is a UK-based organization that provides
telecommunication services in India. The project builds a centralized
point-of-sale solution for the telecom client Vodafone. CPOS is a centralized
solution that replaces the existing multiple disparate POS solutions built on
different technologies; the client already had three different POS applications
implemented in different regions across India.
The client wanted a single centralized POS for all regions and circles in
India. The solution is based on a service-oriented architecture: the different
parts of the functionality are implemented as services, integration with other
applications is done via various integration techniques, and process
implementation is done using Process Server. The solution is divided into three
main functional modules: inventory, sales, and the customer registration system
(CRS). Three databases are involved: BSCS, STG (production), and CPOS
(pre-production). To ensure data accuracy, both the first (full) load and the
subsequent delta loads are verified while loading data into pre-production.
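The first-load / delta-load pattern described above can be sketched in HiveQL.
This is only an illustrative sketch: every database, table, and column name
here (stg.customer, cpos.customer, customer_id, last_modified) is hypothetical,
not taken from the project.

```sql
-- First (full) load: copy everything from staging into pre-production.
-- All table and column names are illustrative.
INSERT OVERWRITE TABLE cpos.customer
SELECT * FROM stg.customer;

-- Delta load: bring over only rows that are new or changed since the
-- last load, identified here by a last_modified timestamp.
INSERT INTO TABLE cpos.customer
SELECT s.*
FROM stg.customer s
LEFT OUTER JOIN cpos.customer c
  ON s.customer_id = c.customer_id
WHERE c.customer_id IS NULL
   OR s.last_modified > c.last_modified;
```

In practice a delta load like this is typically followed by a reconciliation
query (for example, row counts per load) to confirm the accuracy check the
description mentions.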
Roles and Responsibilities:
 Hands-on experience with MapReduce and HDFS.
 Set up Pig and Hive on multiple nodes and developed using Pig, Hive,
and MapReduce.
 Developed MapReduce applications using Hadoop and the MapReduce
programming model.
 Involved in developing Pig scripts.
 Implemented application components using MapReduce for performance.
 Imported flat files and stored them in HDFS.
 Monitored daily and monthly jobs.
 Exposure to partitioning and Pig.
 Wrote commands to check files and directories in HDFS.
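As a minimal sketch of the partitioning mentioned above, a Hive table can be
partitioned by circle so that queries touch only the relevant data. The table,
column, and HDFS path names below are hypothetical.

```sql
-- Partitioned table sketch; all names are illustrative.
CREATE TABLE sales_by_circle (
  msisdn STRING,
  amount DOUBLE
)
PARTITIONED BY (circle STRING);

-- Load a flat file from HDFS into a single partition.
LOAD DATA INPATH '/data/cpos/sales_ap.csv'
INTO TABLE sales_by_circle
PARTITION (circle = 'AP');

-- A query that filters on the partition column reads only that partition.
SELECT COUNT(*) FROM sales_by_circle WHERE circle = 'AP';
```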
Technical Environment: Hadoop, SQL, Java, Toad.
Project 1:
Organization : SUNPRO CYBER SYSTEMS PVT. LTD.
Project Name : CRS (Customer Registration System)
Client : Vodafone India Services Pvt Ltd.
Duration : Nov 2014 to Oct 2015.
Technology : Apache Hadoop.
Project description:
CRS is a migration from BSCS, EPOS, and PPS to CPOS, a telecom process
spanning all circles. It provides services to the customer and maintains CPOS
as a central point-of-sale repository. It covers both prepaid and postpaid
services as well as the MNP (mobile number portability) process for any network
in any circle. It is purely a Telecom-domain, service-based process.
Services are checked against the STG and PROD databases: data is moved into
PROD (CPOS) as a first (full) load followed by delta loads.
Technical Environment: SQL, Hive, Pig, HBase.
Responsibilities:
 Exposure to SQL and Hive scripting.
 Worked on custom MapReduce programs using Java.
 Designed and developed Pig data transformation scripts to work against
unstructured data from various data points and created a baseline.
 Worked on creating and optimizing Hive scripts for data analysts based on
the requirements.
 Good experience working with sequence files and compressed file formats.
 Resolved performance issues and tuned Pig and Hive scripts.
 Wrote script files for processing data and loading it into HDFS.
 Created and maintained technical documentation for launching Hadoop
clusters and for executing Hive and Pig scripts.
 Improved performance using the MapReduce component.
 Stored data in HDFS and monitored jobs.
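The sequence-file and compression work mentioned above can be sketched in
HiveQL as follows. The settings shown are standard Hive/Hadoop configuration
properties; the table and column names are hypothetical.

```sql
-- Enable compressed output for the query results
-- (standard Hive/Hadoop properties).
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;

-- Store the table in the SequenceFile format; names are illustrative.
CREATE TABLE crs_customer_seq (
  customer_id STRING,
  circle      STRING,
  plan_type   STRING
)
STORED AS SEQUENCEFILE;

-- Rewrite raw text data into the compressed sequence-file table.
INSERT OVERWRITE TABLE crs_customer_seq
SELECT customer_id, circle, plan_type FROM crs_customer_raw;
```

Sequence files are splittable, so compressed data stored this way can still be
processed in parallel by MapReduce jobs.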