Sr. Software Engineer – Data Engineering (100% Remote Throughout the US, OR New York, OR Charlotte)
Job Description

We are seeking an experienced Data Engineer with in-depth knowledge of data modeling tools and techniques to join our Enterprise Solutions Architecture team. In this role, you will be responsible for the design and development of AWS and analytical solutions, and for supporting the integration of cross-functional data spanning various enterprise business applications. The data modeler will support our software engineers, data analysts, and data scientists on data initiatives and will ensure an optimal, consistent data delivery architecture throughout projects. The engineer will be part of a development team using an agile methodology to continuously deliver business value.

The Role / Responsibilities:
- Participate in design sessions to understand business needs and functional / non-functional requirements, and apply appropriate patterns and practices.
- Design and develop data models across various functional areas and ensure enterprise data is structured for ease of use.
- Develop and maintain an integrated, subject-oriented, predominantly conceptual and logical data model that represents essential data produced and consumed across the enterprise for a domain.
- Apply strong relational and dimensional data modeling and information classification expertise.
- Represent data modeling concerns in design, implementation, and deployment reviews.
- Lead proofs of concept and prototypes of leading and emerging data technologies.
- Account for solutions in the supported functional area by safeguarding the data model consistency, the data flows process structure and the data integrity.
- Develop, manage and update the data models, including physical and logical models of the data warehouse, data mart and staging area and sometimes the operational data store and data source systems.
- Review Data Models with Data SMEs, Data Modeler (peer reviews), Development teams, and architecture teams.
- Deliver the data model implementation in AWS with the project team.
Candidates are expected to demonstrate experience in each of the above responsibilities and focus areas, with the following specific qualifications:
- BS in Computer Science, Mathematics, Business, Statistics or related technical field required.
- 5+ years as a Data Engineer in AWS data modeling or a related field, with a track record of manipulating, processing, and extracting value from large datasets.
- Minimum of 5 years' experience with logical design, demonstrated on large-scale business solutions spanning multiple divisions or the enterprise.
- Experience in data modeling, harmonization, cross-functional integration, optimization, supporting enhancements, and troubleshooting support issues.
- Experience with applied design techniques for optimizing models for operational, reporting, analysis, and distribution/API data needs in AWS.
- Experience developing pipelines for ingestion of data from various sources.
- Experience working with AWS technologies (S3, Glue Crawler and ETL, ECS, SageMaker, Redshift, RDS, Aurora PostgreSQL, SQS, EKS, CloudFormation) and computing needs (SQL, Lambda, Python).
- Experience with infrastructure-as-code platforms (Terraform, Chef, etc.).
- Experience with containerization leveraging Docker and CI/CD pipelines (GitLab CI, CircleCI, Codefresh, etc.).
- Experience with web development, APIs, and web services.
- Experience developing solutions with a mix of structured, semi-structured, and unstructured information.
- Strong analytical skills; ability to analyze raw data, draw conclusions, and develop actionable recommendations.
- Strong verbal and written communication skills.
- Organized, detail-oriented, QA-focused.
- Experience building and operating highly available, distributed systems for the extraction, ingestion, and processing of large data sets.
- Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations.
- Hands-on experience writing complex, highly optimized SQL queries across large data sets.
- Proficiency in one or more of the following languages: Python/PySpark, Java, R, or similar.
- Industry certification in AWS is a plus.
Moody's is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, sex, gender, age, religion, national origin, citizen status, marital status, physical or mental disability, military or veteran status, sexual orientation, gender identity, gender expression, genetic information, or any other characteristic protected by law. Moody's also provides reasonable accommodation to qualified individuals with disabilities in accordance with applicable laws. If you need to inquire about a reasonable accommodation, or need assistance with completing the application process, please email firstname.lastname@example.org. This contact information is for accommodation requests only, and cannot be used to inquire about the status of applications.
For San Francisco positions, qualified applicants with criminal histories will be considered for employment consistent with the requirements of the San Francisco Fair Chance Ordinance. For New York City positions, qualified applicants with criminal histories will be considered for employment consistent with the requirements of the New York City Fair Chance Act. For all other applicants, qualified applicants with criminal histories will be considered for employment consistent with the requirements of applicable law.
Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody's Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary.