- Expiry Date: 11 December 2021
Hiring Manager Notes:
What are the must-haves needed to be successful in this role?
A passion for, and expert understanding of, database engineering concepts, including data acquisition, preparation, processing, governance, and wrangling of unstructured and structured datasets at scale.
Advanced knowledge and experience working with database architectures, technologies, and tools such as SQL, NoSQL, Big Data, Hive/Hadoop, Snowflake, Spark, R, Python, Excel.
Effective analytical, written, and verbal communication skills; the candidate can effectively articulate complex ideas, concepts, and designs to a wide variety of stakeholders and business partners.
Plus skills and experience include:
Client platform (data analytics) development experience
Data warehouse architecture design and implementation
Experience working with and building AI/ML libraries and frameworks
What you will do in this role:
Gather, manage, process, prepare, merge, and wrangle large, complex raw data sets to build optimal data pipeline architectures aligned with functional and non-functional business requirements.
Support and scale the infrastructure required for optimal extraction, transformation, and loading (ETL) and pipeline development from a wide variety of data sources using SQL, Python, and big-data technologies.
Identify and manage optimal data collection, governance, and sanitization processes according to their compliance levels across sensitive and non-sensitive boundaries.
Act as a liaison between the capacity planning, data analytics, and big data engineering teams to distill technical requirements.
Partner with data scientists and data analytics leads to drive improvements in data quality, reliability, and efficiency.
Partner with CLIENT application developers to design and develop data applications that enhance capacity planning capabilities.
To be successful in this role, it's ideal to have:
5+ years of advanced knowledge working with a wide variety of RDBMS and related systems featuring SQL, NoSQL, HDFS, Python, R, and related languages to develop, automate, and manage data applications, transformations, and data pipelines at scale.
2-3 years of experience supporting Big Data solutions using processing frameworks such as Pig, Spark, and MapReduce
2 years of experience working with infrastructure or cloud infrastructure, SaaS, IaaS, FinTech, capacity planning, supply chain, or data analytics environments.
Working knowledge of stream processing, time series, and data mining concepts
Demonstrated understanding of data modeling, forecasting, and exploratory, causal, and econometric data analysis
Bachelor’s degree in computer science, information technology, or equivalent experience in a related discipline
About ASK: ASK Consulting is an award-winning technology and professional services recruiting firm serving Fortune 500 organizations nationally. With 5 nationwide offices, two global delivery centers, and employees in 42 states, ASK Consulting connects people with amazing opportunities.
ASK Consulting is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all associates.