Deloitte Consulting LLP seeks a Consulting, Consultant in Chicago, IL and various unanticipated Deloitte office locations and client sites nationally.
Work You’ll Do
Provide advisory and implementation services for large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to help companies unlock the value of big technology investments. Work across all phases of the Agile and Waterfall lifecycles in the design, development, testing, and implementation of Master Data Management, Data Governance, Data Analytics, and Data Integration solutions for enterprise-level clients. Provide technical recommendations for optimized data access and retention from various data warehouses. Implement data integration solutions using dimensional modeling, granularity, and source-to-target mapping to integrate new BI/DW requirements. Perform data management, including master data, metadata, data architecture, data governance, data quality, and data modeling. Design database queries, triggers, procedures, functions, and packages for reporting and data analytics. Design and develop data cleansing routines utilizing typical data quality functions, including standardization, transformation, rationalization, linking, and matching. Implement data enrichment, lookup, filtering, and data cleansing routines and solutions to improve the quality of ingested data.
- 50% travel required nationally.
- Telecommuting permitted.
Requirements
Bachelor’s degree or foreign equivalent in Business Administration, any STEM field, or a related field. Must have 1 year of related work experience in the job offered or in a related occupation. Position requires 1 year of related work experience in each of the following:
- Participating in various aspects of the full data delivery lifecycle for implementations, including requirements gathering, business process analysis, scripting, and code deployments, utilizing Git, Jira, Agile software development, cloud infrastructure platforms (GCP, AWS), REST APIs, containers (Kubernetes), and DevOps principles;
- Developing business solutions leveraging emerging automation, cognitive, and machine learning tools, including regression analysis, clustering, time-series forecasting, and predictive modeling;
- Utilizing Big Data technologies, including Apache Spark (PySpark), the Hadoop ecosystem (Pig, Hive), and Apache Airflow, to implement batch, micro-batch, and streaming data processing architectures and to develop tools and services enabling end-user productivity and system integrations;
- Helping to provide industry insight and analytical support through data mining, pattern matching, data visualization, and predictive modeling using Python, Scikit-learn, TensorFlow, R, Databricks, and Jupyter notebooks;
- Designing and implementing reporting and visualization for unstructured and structured data sets utilizing Tableau dashboards, Python, and R;
- Building reports and data models in spreadsheets based on performance measurement metrics, allowing business and technical users to provide insights into the organization’s data; and
- Developing, testing, and automating Extract, Transform, and Load (ETL) procedures using SQL and BigQuery to perform data warehousing and data integration functions.
EOE.