The EY Data and Analytics team are specialists in information management, advanced analytics and business intelligence. We implement the information-driven strategies and systems that offer the highest return on investment, profitability, and service or policy outcomes for our clients.
Our consultants work to create a lasting organisational culture that encourages people to use information and technology more creatively and more intelligently to get better business results.
EY Data and Analytics is the data and advanced analytics capability within EY Asia-Pacific. We have vibrant practices in Australia, New Zealand, Singapore, Hong Kong, Korea, The Philippines and Malaysia.
EY Data and Analytics creates intelligent client organisations using data and advanced analytics. We go beyond strategy to provide end-to-end implementation of real-life data environments, and have some of the best architects, project managers, business analysts, data scientists, big data engineers, developers and consultants in the region.
Due to our continued growth, we are looking for a talented, inquisitive and proactive Big Data IM (Information Management) consultant to join our team.
Bachelor's degree or above in Analytics, Information Systems Management, Computer Science or related fields.
Hands-on experience in implementing data integration processes; designing and developing data models (ER/Dimensional/Vault); and designing, developing and building detailed ETL/ELT processes or programs.
Contributed to at least two phases of the SDLC, with experience in Big Data, data warehouse, data analytics, data migration, change management and/or other IM (Information Management) related projects.
Experience with Hadoop technologies such as HDFS/MapR-FS, MapReduce (v2), advanced HDFS ACLs, Hive, HBase, Cassandra, Impala, Spark, Drill, Sentry, Sqoop, Flume, Kafka, Storm, ZooKeeper and the zkCli tool.
Good understanding of the Cloudera, Hortonworks or MapR Hadoop distributions, with a deep understanding of administration concepts.
Experience in working with RDBMS technologies such as Oracle, Microsoft SQL Server, PostgreSQL, DB2, MySQL, etc.
Hands-on experience with Spark, Spark SQL, HiveQL, Drill QL, Impala, Spark DataFrames, and Flink CEP and Flink Table API & SQL as ETL frameworks.
Hands-on programming skills in Scala/Python using the Spark/Flink frameworks.
Strong knowledge of Big Data stream ingestion and IoT streaming using Flume, Kafka, Storm, MQTT or RabbitMQ.
Good understanding of Spark memory management, both with and without YARN.
Should have a basic understanding of Cloudera Manager, Hortonworks Ambari or MapR Control System.
Should have experience designing and developing database components and objects with one or more NoSQL technologies such as Cassandra, MongoDB, HBase, CouchDB/Couchbase, Elasticsearch, etc.
Should have good working knowledge of HCatalog and Hive metadata.
Should have working knowledge of Kerberos authentication.
Experience in commercial ETL tools such as Talend, Informatica or Alteryx will be an added advantage.
Experience in MPP database technologies such as Teradata, Greenplum, IBM PureData, etc. will be an added advantage.
Good knowledge of data warehouse and data management implementation methodology.
Good knowledge of the Information Management framework, including operating model, data governance, data management, data security, data quality and data architecture.
Knowledge and experience in data visualisation concepts using tools such as SAS Visual Analytics or WRS, Tableau, Microsoft Power BI or Reporting Services, IBM Cognos, SAP BusinessObjects, etc. will be an advantage.
Ability to pick up new tools and to work independently with minimal guidance from project leads/managers.
Strong analytical and creative problem-solving capabilities.
Ability to establish personal credibility quickly and demonstrate expertise.
Ability to create a positive learning culture, and to coach and develop team members.
4 to 10 years of experience in data warehouse, data analytics, change management and/or other IM (Information Management) related projects.
Delivered at least two (2) full-SDLC projects.
Industry or domain experience in at least one of Banking, Telecommunications or Consulting.
Preferably with experience in implementation best practices involving data management, data reconciliation, data de-duplication, scheduling, etc.
Able to assess design considerations with respect to data management and integration.
Experience with Agile/Scrum/Kanban software implementation methodologies.
Should have good knowledge of DevOps engineering using Continuous Integration/Delivery tools such as Docker, Jenkins, Puppet, Chef, GitHub, Atlassian Jira, etc.
Certification in any Hadoop/Big Data tool or technology, data integration, data management or visualisation tool is an added advantage.
Knowledge of infrastructure fundamentals such as operating systems, networking, etc. is an added advantage.
Internal Number: 5475131