Data Engineer Remote Jobs

109 Results

+30d

Senior Data Science Engineer - Remote

RapidSoft Corp - Reston, VA, Remote
agile, Design, java, python

RapidSoft Corp is hiring a Remote Senior Data Science Engineer - Remote

Job Description

Duties and Responsibilities:

  • Develop data solutions, in collaboration with other team members and software engineering teams, that meet and anticipate business goals and strategies
  • Work with senior data science engineers to analyze and understand all aspects of data, including source, design, insight, technology, and modeling
  • Develop and manage scalable data processing platforms for both exploratory and real-time analytics
  • Oversee and develop algorithms for quick data acquisition, analysis, and evolution of the data model to improve search and recommendation engines
  • Document and demonstrate solutions
  • Design system specifications and provide standards and best practices
  • Support and mentor junior data engineers by providing advice and coaching
  • Make informed decisions quickly and take ownership of services and applications at scale
  • Be a persistent, creative problem solver, constantly striving to improve and iterate on both processes and technical solutions
  • Remain cool and effective in a crisis
  • Understand business needs and know how to create the tools to manage them
  • Take initiative, own the problem, and own the solution
  • Other duties as assigned

Supervisory Responsibilities:

  • None

Minimum Qualifications:

  • Bachelor's Degree in Data Engineering, Computer Science, Information Technology, or a related discipline (or equivalent experience)
  • 8+ years of experience in data engineering development
  • 5+ years of experience working in object-oriented programming languages such as Python or Java
  • Experience working in an Agile environment

See more jobs at RapidSoft Corp

Apply for this job

+30d

Senior Data Engineer

phData - India, Remote
scala, sql, azure, java, python, AWS

phData is hiring a Remote Senior Data Engineer

See more jobs at phData

Apply for this job

+30d

Lead Data Engineer

phData - India, Remote
scala, sql, azure, java, python, AWS

phData is hiring a Remote Lead Data Engineer

See more jobs at phData

Apply for this job

+30d

Senior Data Engineer

Remote - Remote, Southeast Asia
airflow, sql, jenkins, python, AWS

Remote is hiring a Remote Senior Data Engineer

About Remote

Remote is solving global remote organizations’ biggest challenge: employing anyone anywhere compliantly. We make it possible for businesses big and small to employ a global team by handling global payroll, benefits, taxes, and compliance. Check out remote.com/how-it-works to learn more or if you’re interested in adding to the mission, scroll down to apply now.

Please take a look at remote.com/handbook to learn more about our culture and what it is like to work here. Not only do we encourage folks of all ethnicities, genders, sexualities, ages, and abilities to apply, but we prioritize a sense of belonging. You can check out independent reviews by other candidates on Glassdoor or look up the results of our candidate surveys to see how others feel about working and interviewing here.

All of our positions are fully remote. You do not have to relocate to join us!

What this job can offer you

This is an exciting time to join the growing Data Team at Remote, which today consists of over 15 Data Engineers, Analytics Engineers, and Data Analysts spread across 10+ countries. Across the team we're focused on driving business value through impactful decision-making. We're in a transformative period where we're laying the foundations for scalable company growth across our data platform, which truly serves every part of the Remote business. This team would be a great fit for anyone who loves working collaboratively on challenging data problems and making an impact with their work. We're using a variety of modern data tooling on the AWS platform, such as Snowflake and dbt, with SQL and Python being extensively employed.

This is an exciting time to join Remote and make a personal difference in the global employment space as a Senior Data Engineer, joining our Data team, composed of Data Analysts and Data Engineers. We support decision-making and operational reporting needs by translating data into actionable insights for non-data professionals at Remote. We're mainly using SQL, Python, Meltano, Airflow, Redshift, Metabase, and Retool.

What you bring

  • Experience in data engineering; high-growth tech company experience is a plus
  • Strong experience with building data extraction/transformation pipelines (e.g. Meltano, Airbyte) and orchestration platforms (e.g. Airflow)
  • Strong experience in working with SQL, data warehouses (e.g. Redshift) and data transformation workflows (e.g. dbt)
  • Solid experience using CI/CD (e.g. Gitlab, Github, Jenkins)
  • Experience with data visualization tools (e.g. Metabase) is considered a plus
  • A self-starter mentality and the ability to thrive in an unstructured and fast-paced environment
  • You have strong collaboration skills and enjoy mentoring
  • You are a kind, empathetic, and patient person
  • You write and speak fluent English
  • Experience working remotely is not required, but is considered a plus

Key Responsibilities

  • Playing a key role in Data Platform Development & Maintenance:
    • Managing and maintaining the organization's data platform, ensuring its stability, scalability, and performance.
    • Collaborating with cross-functional teams to understand their data requirements and optimize data storage and access, while protecting data integrity and privacy.
    • Developing and testing architectures that enable data extraction and transformation to serve business needs.
  • Further improving our Data Pipeline & Monitoring Systems:
    • Designing, developing, and deploying efficient Extract, Load, Transform (ELT) processes to acquire and integrate data from various sources into the data platform.
    • Identifying, evaluating, and implementing tools and technologies to improve ELT pipeline performance and reliability.
    • Ensuring data quality and consistency by implementing data validation and cleansing techniques (see the sketch after this list).
    • Implementing monitoring solutions to track the health and performance of data pipelines and identify and resolve issues proactively.
    • Conducting regular performance tuning and optimization of data pipelines to meet SLAs and scalability requirements.
  • Digging deep into dbt modelling:
    • Designing, developing, and maintaining dbt (Data Build Tool) models for data transformation and analysis.
    • Collaborating with Data Analysts to understand their reporting and analysis needs and translate them into dbt models, making sure they respect internal conventions and best practices.
  • Driving our Culture of Documentation:
    • Creating and maintaining technical documentation, including data dictionaries, process flows, and architectural diagrams.
    • Collaborating with cross-functional teams, including Data Analysts, SREs (Site Reliability Engineers), and Software Engineers, to understand their data requirements and deliver effective data solutions.
    • Sharing knowledge and offering mentorship, providing guidance and advice to peers and colleagues, creating an environment that empowers collective growth
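
The validation-and-cleansing bullet above is easy to make concrete. Here is a minimal sketch in Python/pandas of what such a step might look like; the column names, the negative-amount rule, and the quarantine file are illustrative assumptions, not Remote's actual pipeline.

```python
import pandas as pd

def validate_batch(df: pd.DataFrame) -> pd.DataFrame:
    """Toy validation/cleansing pass: enforce required columns, drop
    exact duplicates, and quarantine rows that fail a sanity rule.
    Column names and the rule are illustrative assumptions."""
    required = {"user_id", "amount", "created_at"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"batch is missing columns: {missing}")

    df = df.drop_duplicates()
    bad = df["amount"] < 0                 # hypothetical sanity rule
    if bad.any():
        # park failing rows for manual review instead of silently dropping
        df[bad].to_csv("quarantine.csv", index=False)
    return df[~bad]
```

In production the same idea would more likely live in dbt tests or an Airflow check than in ad-hoc pandas code.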

Practicals

  • You'll report to: Engineering Manager - Data
  • Team: Data 
  • Location: For this position we welcome everyone to apply, but we will prioritise applications from the following locations as we encourage our teams to diversify: Vietnam, Indonesia, Taiwan, and South Korea
  • Start date: As soon as possible

Remote Compensation Philosophy

Remote's Total Rewards philosophy is to ensure fair, unbiased compensation and fair equity pay, along with competitive benefits, in all locations in which we operate. We do not agree to or encourage cheap-labor practices, and therefore we ensure we pay above in-location rates. We hope to inspire other companies to support global talent-hiring and bring local wealth to developing countries.

At first glance our salary bands seem quite wide - here is some context. At Remote we have international operations and a globally distributed workforce. We use geo ranges to consider geographic pay differentials as part of our global compensation strategy to remain competitive in various markets while hiring globally.

The base salary range for this full-time position is $53,500 USD to $131,300 USD. Our salary ranges are determined by role, level and location, and our job titles may span more than one career level. The actual base pay for the successful candidate in this role is dependent upon many factors such as location, transferable or job-related skills, work experience, relevant training, business needs, and market demands. The base salary range may be subject to change.

Application process

  1. Interview with recruiter
  2. Interview with future manager
  3. Async exercise stage 
  4. Interview with team members

#LI-DP

Benefits

Our full benefits & perks are explained in our handbook at remote.com/r/benefits. As a global company, each country works differently, but some benefits/perks are for all Remoters:
  • work from anywhere
  • unlimited personal time off (minimum 4 weeks)
  • quarterly company-wide day off for self care
  • flexible working hours (we are async)
  • 16 weeks paid parental leave
  • mental health support services
  • stock options
  • learning budget
  • home office budget & IT equipment
  • budget for local in-person social events or co-working spaces

How you’ll plan your day (and life)

We work async at Remote which means you can plan your schedule around your life (and not around meetings). Read more at remote.com/async.

You will be empowered to take ownership and be proactive. When in doubt you will default to action instead of waiting. Your life-work balance is important and you will be encouraged to put yourself and your family first, and fit work around your needs.

If that sounds like something you want, apply now!

How to apply

  1. Please fill out the form below and upload your CV in PDF format.
  2. We kindly ask you to submit your application and CV in English, as this is the standardised language we use here at Remote.
  3. If you don't have an up-to-date CV but are still interested in talking to us, please feel free to add a copy of your LinkedIn profile instead.

We will ask you to voluntarily tell us your pronouns at the interview stage, and you will have the option to answer our anonymous demographic questionnaire when you apply below. As an equal employment opportunity employer it's important to us that our workforce reflects people of all backgrounds, identities, and experiences, and this data will help us to stay accountable. We thank you for providing this data, if you choose to.

See more jobs at Remote

Apply for this job

+30d

BI Data Engineer

Credible - Remote, United States
c++

Credible is hiring a Remote BI Data Engineer

See more jobs at Credible

Apply for this job

+30d

Senior Data Engineer

SmartMessage - İstanbul, TR, Remote
ML, S3, SQS, Lambda, Master's Degree, nosql, Design, mongodb, azure, python, AWS

SmartMessage is hiring a Remote Senior Data Engineer

Who are we?

We are a globally expanding software technology company that helps brands communicate more effectively with their audiences. We are looking to expand our people capabilities, build on our success in developing high-end solutions beyond existing boundaries, and establish our brand as a global powerhouse.

We are free to work from wherever we want and go to the office whenever we like!

What is the role?

We are looking for a highly skilled and motivated Senior Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in building and managing data pipelines, noSQL databases, and cloud-based data platforms. You will work closely with data scientists and other engineers to design and implement scalable data solutions.

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines and architectures.
  • Implement data lake solutions on cloud platforms.
  • Develop and manage noSQL databases (e.g., MongoDB, Cassandra).
  • Work with graph databases (e.g., Neo4j) and big data technologies (e.g., Hadoop, Spark).
  • Utilize cloud services (e.g., S3, Redshift, Lambda, Kinesis, EMR, SQS, SNS).
  • Ensure data quality, integrity, and security.
  • Collaborate with data scientists to support machine learning and AI initiatives.
  • Optimize and tune data processing workflows for performance and scalability.
  • Stay up-to-date with the latest data engineering trends and technologies.

Detailed Responsibilities and Skills:

  • Business Objectives and Requirements:
    • Engage with business IT and data science teams to understand their needs and expectations from the data lake.
    • Define real-time analytics use cases and expected outcomes.
    • Establish data governance policies for data access, usage, and quality maintenance.
  • Technology Stack:
    • Real-time data ingestion using Apache Kafka or Amazon Kinesis.
    • Scalable storage solutions such as Amazon S3, Google Cloud Storage, or Hadoop Distributed File System (HDFS).
    • Real-time data processing using Apache Spark or Apache Flink.
    • NoSQL databases like Cassandra or MongoDB, and specialized time-series databases like InfluxDB.
  • Data Ingestion and Integration:
    • Set up data producers for real-time data streams.
    • Integrate batch data processes to merge with real-time data for comprehensive analytics.
    • Implement data quality checks during ingestion.
  • Data Processing and Management:
    • Utilize Spark Streaming or Flink for real-time data processing (see the sketch after this list).
    • Enrich clickstream data by integrating with other data sources.
    • Organize data into partitions based on time or user attributes.
  • Data Lake Storage and Architecture:
    • Implement a multi-layered storage approach (raw, processed, and aggregated layers).
    • Use metadata repositories to manage data schemas and track data lineage.
  • Security and Compliance:
    • Implement fine-grained access controls.
    • Encrypt data in transit and at rest.
    • Maintain logs of data access and changes for compliance.
  • Monitoring and Maintenance:
    • Continuously monitor the performance of data pipelines.
    • Implement robust error handling and recovery mechanisms.
    • Monitor and optimize costs associated with storage and processing.
  • Continuous Improvement and Scalability:
    • Establish feedback mechanisms to improve data applications.
    • Design the architecture to scale horizontally.
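
To make the ingestion and processing items above concrete (see the Spark Streaming bullet), here is a minimal PySpark Structured Streaming sketch that reads a clickstream topic from Kafka and lands it as time-partitioned Parquet. The broker address, topic, schema, and storage paths are placeholder assumptions, not SmartMessage's actual setup.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, to_date
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# Requires the spark-sql-kafka connector package on the Spark classpath.
spark = SparkSession.builder.appName("clickstream-ingest").getOrCreate()

# Assumed event shape; a real schema would come from the data contract.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("event", StringType()),
    StructField("ts", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "clickstream")                # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
    .withColumn("event_date", to_date(col("ts")))      # partition key
)

# Land raw events in the lake, partitioned by day (the "raw layer").
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://lake/raw/clickstream/")
    .option("checkpointLocation", "s3a://lake/checkpoints/clickstream/")
    .partitionBy("event_date")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```

The same pattern extends to the processed and aggregated layers described above by chaining further streaming or batch jobs off the raw data.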

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
  • 5+ years of experience in data engineering or related roles.
  • Proficiency in noSQL databases (e.g., MongoDB, Cassandra) and graph databases (e.g., Neo4j).
  • Strong experience with cloud platforms (e.g., AWS, GCP, Azure).
  • Hands-on experience with big data technologies (e.g., Hadoop, Spark).
  • Proficiency in Python and data processing frameworks.
  • Experience with Kafka, ClickHouse, Redshift.
  • Knowledge of ETL processes and data integration.
  • Familiarity with AI, ML algorithms, and neural networks.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and teamwork skills.
  • Entrepreneurial spirit and a passion for continuous learning.

Join our team!

See more jobs at SmartMessage

Apply for this job

+30d

Sr. Data Engineer, Marketing Tech

MLDevOPSLambdaagileairflowsqlDesignapic++dockerjenkinspythonAWSjavascript

hims & hers is hiring a Remote Sr. Data Engineer, Marketing Tech

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers subscribers.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability.
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources.
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources to the Data Lake (see the sketch after this list).
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance 
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling.
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics.
  • Partner with the rest of the Data Platform team to set best practices and ensure the execution of them.
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources.
  • Partner with machine learning engineers to deploy predictive models.
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies.
  • Partner with DevOps to build IaC and CI/CD pipelines.
  • Support code versioning and code deployments for data Pipelines.
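
As one concrete reading of the REST-API ingestion bullet flagged above, here is a small Python sketch that pulls a page of records from an external endpoint and lands the raw JSON in S3. The endpoint, bucket, and key layout are invented for illustration; this is not Hims & Hers' actual code.

```python
import json
from datetime import datetime, timezone

import boto3      # AWS SDK
import requests   # HTTP client

def ingest_endpoint(url: str, bucket: str, prefix: str) -> str:
    """Pull one page of records from an external REST endpoint and land
    the raw JSON in the data lake (S3), keyed by load timestamp."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()                      # fail loudly on 4xx/5xx
    key = f"{prefix}/load_ts={datetime.now(timezone.utc):%Y-%m-%dT%H%M%S}.json"
    boto3.client("s3").put_object(
        Bucket=bucket,
        Key=key,
        Body=json.dumps(resp.json()).encode("utf-8"),
    )
    return key

# Hypothetical usage:
# ingest_endpoint("https://api.example.com/v1/orders", "data-lake-raw", "orders")
```

Landing the untouched payload first, then transforming downstream (with dbt or Spark), keeps the pipeline replayable when upstream schemas change.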

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages.
  • Demonstrated experience writing clean, efficient & well-documented Python code and are willing to become effective in other languages as needed.
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets.
  • Experience working with customer behavior data. 
  • Experience with JavaScript, event tracking tools like GTM, analytics tools like Google Analytics and Amplitude, and CRM tools. 
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform.
  • Experience with serverless architecture (Google Cloud Functions, AWS Lambda).
  • Experience with IaC technologies like Terraform.
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres.
  • Experience building event streaming pipelines using Kafka/Confluent Kafka.
  • Experience with modern data stack like Airflow/Astronomer, Fivetran, Tableau/Looker.
  • Experience with containers and container orchestration tools such as Docker or Kubernetes.
  • Experience with Machine Learning & MLOps.
  • Experience with CI/CD (Jenkins, GitHub Actions, Circle CI).
  • Thorough understanding of SDLC and Agile frameworks.
  • Project management skills and a demonstrated ability to work autonomously.

Nice to Have:

  • Experience building data models using dbt
  • Experience designing and developing systems with desired SLAs and data quality metrics.
  • Experience with microservice architecture.
  • Experience architecting an enterprise-grade data platform.

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range for US-based employees is $140,000 to $170,000 USD.

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

Hims & Hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

+30d

Sr. Data Engineer, Kafka

DevOPS, agile, terraform, airflow, postgres, sql, Design, api, c++, docker, kubernetes, jenkins, python, AWS, javascript

hims & hers is hiring a Remote Sr. Data Engineer, Kafka

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers users.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources to the Data Lake (see the sketch after this list)
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
  • Partner with the rest of the Data Platform team to set best practices and ensure the execution of them
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources
  • Partner with machine learning engineers to deploy predictive models
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies
  • Partner with DevOps to build IaC and CI/CD pipelines
  • Support code versioning and code deployments for data Pipelines
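
For the Kafka side of the pipeline bullet flagged above, here is a minimal consumer sketch using the confluent-kafka Python client. The broker, consumer group, and topic are placeholder assumptions; a real loader would validate and land each record rather than print it.

```python
from confluent_kafka import Consumer

conf = {
    "bootstrap.servers": "broker:9092",   # placeholder broker
    "group.id": "order-events-loader",    # placeholder consumer group
    "auto.offset.reset": "earliest",
}

consumer = Consumer(conf)
consumer.subscribe(["order-events"])      # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue                      # no message within the timeout
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # A real pipeline would validate the record and write it to the
        # lake or warehouse; printing stands in for that step here.
        print(msg.key(), msg.value())
finally:
    consumer.close()                      # commit offsets and leave the group
```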

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
  • Demonstrated experience writing clean, efficient & well-documented Python code and are willing to become effective in other languages as needed
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform
  • Experience building event streaming pipelines using Kafka/Confluent Kafka
  • Experience with IaC technologies like Terraform
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
  • Experience with Databricks platform
  • Experience with modern data stack like Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, Tableau/Looker
  • Experience with containers and container orchestration tools such as Docker or Kubernetes
  • Experience with Machine Learning & MLOps
  • Experience with CI/CD (Jenkins, GitHub Actions, Circle CI)
  • Thorough understanding of SDLC and Agile frameworks
  • Project management skills and a demonstrated ability to work autonomously

Nice to Have:

  • Experience building data models using dbt
  • Experience with Javascript and event tracking tools like GTM
  • Experience designing and developing systems with desired SLAs and data quality metrics
  • Experience with microservice architecture
  • Experience architecting an enterprise-grade data platform

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range for US-based employees is $140,000 to $170,000 USD.

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

Hims & Hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

+30d

Junior/Mid Data Analytics Engineer

EXUS - Bucharest, Romania, Remote

EXUS is hiring a Remote Junior/Mid Data Analytics Engineer

EXUS is an enterprise software company, founded in 1989 with the vision of simplifying risk management software. EXUS launched its Financial Suite (EFS) in 2003 to support financial entities worldwide in improving their results. Today, our EXUS Financial Suite (EFS) is trusted by risk professionals in more than 32 countries worldwide (MENA, EU, SEA). We introduce simplicity and intelligence into their business processes through technology, improving their collections performance.

Our people constitute the source of inspiration that drives us forward and helps us fulfill our purpose of being role models for a better world.
This is your chance to be part of a highly motivated, diverse, and multidisciplinary team, which embraces breakthrough thinking and technology to create software that serves people.

Our shared Values:

  • We are transparent and direct
  • We are positive and fun, never cynical or sarcastic
  • We are eager to learn and explore
  • We put the greater good first
  • We are frugal and we do not waste resources
  • We are fanatically disciplined, we deliver on our promises

We are EXUS! Are you?

Join our dynamic Data Analytics Team as we expand our capabilities into data lakehouse architecture. We are seeking a Junior/Mid Data Analytics Engineer who is enthusiastic about creating compelling data visualizations, communicating them effectively to customers, conducting training sessions, and gaining experience in managing ETL processes for big data.

Key Responsibilities:

  • Develop and maintain reports and dashboards using leading visualization tools, and craft advanced SQL queries for additional report generation (see the sketch after this list).
  • Deliver training sessions on our Analytic Solution and effectively communicate findings and insights to both technical and non-technical customer audiences.
  • Collaborate with business stakeholders to gather and analyze requirements.
  • Debug issues in the front-end analytic tool, investigate underlying causes, and resolve these issues.
  • Monitor and maintain ETL processes as part of our transition to a data lakehouse architecture.
  • Proactively investigate and implement new data analytics technologies and methods.
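
As a toy version of the report-generation bullet above, here is a Python sketch that runs one such SQL query through SQLAlchemy/pandas and exports the result. The connection string, table, and columns are invented for illustration; the real reports would be built in the visualization tools named below.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string; a Postgres driver (e.g. psycopg2) must be installed.
engine = create_engine("postgresql://user:pass@dbhost/collections")

# Illustrative "additional report" query; table and columns are assumptions.
report = pd.read_sql(
    """
    SELECT portfolio, COUNT(*) AS open_cases, SUM(balance) AS exposure
    FROM   cases
    WHERE  status = 'OPEN'
    GROUP  BY portfolio
    ORDER  BY exposure DESC
    """,
    engine,
)
report.to_csv("open_cases_by_portfolio.csv", index=False)
```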

Required Skills and Qualifications:

  • A BSc or MSc degree in Computer Science, Engineering, or a related field.
  • 1-5 years of experience with data visualization tools and techniques. Knowledge of MicroStrategy and Apache Superset is a plus.
  • 1-5 years of experience with Data Warehouses, Big Data, and/or Cloud technologies. Exposure to these areas in academic projects, internships, or entry-level roles is also acceptable.
  • Familiarity with PL/SQL and practical experience with SQL for data manipulation and analysis. Hands-on experience through academic coursework, personal projects, or job experience is valued.
  • Familiarity with data Lakehouse architecture.
  • Excellent analytical skills to understand business needs and translate them into data models.
  • Organizational skills with the ability to document work clearly and communicate it professionally.
  • Ability to independently investigate new technologies and solutions.
  • Strong communication skills, capable of conducting presentations and engaging effectively with customers in English.
  • Demonstrated ability to work collaboratively in a team environment.

What we offer:

  • Competitive salary
  • Friendly, pleasant, and creative working environment
  • Remote Working
  • Development Opportunities
  • Private Health Insurance Allowance

Privacy Notice for Job Applications: https://www.exus.co.uk/en/careers/privacy-notice-f...

See more jobs at EXUS

Apply for this job

+30d

Sr Data Engineer

Verisk - Jersey City, NJ, Remote
Lambda, sql, Design, linux, python, AWS

Verisk is hiring a Remote Sr Data Engineer

Job Description

We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data pipeline architecture. The ideal candidate is an experienced data pipeline builder and data wrangler with strong experience in handling data at scale. The Data Engineer will support our software developers, data analysts and data scientists on various data initiatives.

This is a remote role that can be done anywhere in the continental US; work is on Eastern time zone hours.

Why this role

This is a highly visible role within the enterprise data lake team. Working with our Data group and business analysts, you will be responsible for leading the creation of the data architecture that produces our data assets and enables our data platform. This role requires working closely with business leaders, architects, engineers, data scientists, and a wide range of stakeholders throughout the organization to build and execute our strategic data architecture vision.

Job Duties

  • Extensive understanding of SQL queries. Ability to fine-tune queries based on various RDBMS performance parameters such as indexes, partitioning, EXPLAIN plans, and cost optimizers.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and the AWS technology stack
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with data scientists and industry leaders to understand data needs and design appropriate data models.
  • Participate in the design and development of the AWS-based data platform and data analytics.

Qualifications

Skills Needed

  • Design and implement data ETL frameworks for a secured Data Lake, creating and maintaining an optimal pipeline architecture.
  • Examine complex data to optimize the efficiency and quality of the data being collected, resolve data quality problems, and collaborate with database developers to improve systems and database designs.
  • Hands-on experience building data applications using AWS Glue, Lake Formation, Athena, AWS Batch, AWS Lambda, Python, and Linux shell & batch scripting.
  • Hands-on experience with AWS database services (Redshift, RDS, DynamoDB, Aurora, etc.)
  • Experience writing advanced SQL scripts involving self-joins, window functions, correlated subqueries, CTEs, etc. (see the sketch after this list)
  • Strong understanding of and experience using data management fundamentals, including concepts such as data dictionaries, data models, validation, and reporting.
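
For a flavor of the advanced SQL the bullet above refers to, here is a query combining a CTE with a window function, wrapped in a Python string only to keep this document's examples in one language. Schema and column names are illustrative assumptions.

```python
# A CTE plus a running-total window function, of the kind the post references.
QUERY = """
WITH daily AS (
    SELECT user_id,
           event_date,
           COUNT(*) AS events
    FROM   analytics.clickstream          -- hypothetical table
    GROUP  BY user_id, event_date
)
SELECT user_id,
       event_date,
       events,
       SUM(events) OVER (PARTITION BY user_id
                         ORDER BY event_date
                         ROWS UNBOUNDED PRECEDING) AS running_events
FROM   daily
ORDER  BY user_id, event_date;
"""

# Run with any DB-API connection, e.g.: cursor.execute(QUERY)
```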

Education and Training

  • 10 years of full-time software engineering experience preferred, with at least 4 years in an AWS environment focused on application development.
  • Bachelor’s degree or foreign equivalent degree in Computer Science, Software Engineering, or related field
  • US citizenship required

#LI-LM03
#LI-Hybrid

See more jobs at Verisk

Apply for this job

+30d

Senior Data Engineer

Synack - Remote in the US
c++

Synack is hiring a Remote Senior Data Engineer

See more jobs at Synack

Apply for this job

+30d

Data Engineer

Devoteam - Tunis, Tunisia, Remote
airflow, sql, scrum

Devoteam is hiring a Remote Data Engineer

Job Description

Within the "Data Platform" division, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing your expertise to the following tasks:

  • Design, develop, and maintain robust, scalable data pipelines on Google Cloud Platform (GCP), using tools such as BigQuery, Airflow, Looker, and DBT (see the sketch after this list).
  • Collaborate with business teams to understand data requirements and design appropriate solutions.
  • Optimize the performance of data processing and ELT workflows using Airflow, DBT, and BigQuery.
  • Implement data quality processes to guarantee data integrity and consistency.
  • Work closely with engineering teams to integrate data pipelines into existing applications and services.
  • Stay up to date with new technologies and best practices in data processing and analytics.
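
As a concrete illustration of the BigQuery/ELT work above, here is a minimal Python sketch that runs one in-warehouse transformation using Google's google-cloud-bigquery client. The dataset and table names are invented placeholders, and in practice a step like this would typically live in a DBT model rather than hand-run SQL.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Illustrative ELT step: transform raw events into a reporting table.
# Dataset and table names are hypothetical.
sql = """
CREATE OR REPLACE TABLE reporting.daily_sessions AS
SELECT user_id,
       DATE(ts) AS day,
       COUNT(*) AS sessions
FROM   raw.events
GROUP  BY user_id, day
"""
client.query(sql).result()  # .result() blocks until the query job finishes
```

Pushing the transformation into BigQuery itself, rather than pulling data out, is the usual ELT design choice on GCP: storage and compute stay in the warehouse, and Airflow only orchestrates.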

 

Qualifications

  • Engineering degree (Bac+5) or equivalent university degree with a specialization in computer science.
  • At least 4 years of experience in data engineering, with significant experience in a GCP-based cloud environment.
  • Advanced SQL proficiency for data optimization and processing.
  • Google Professional Data Engineer certification is a plus.
  • Excellent written and oral communication (high-quality deliverables and reporting).

See more jobs at Devoteam

Apply for this job

+30d

Data Engineer (Australia)

DemystData - Australia, Remote
Sales, S3, EC2, Lambda, remote-first, Design, python, AWS

DemystData is hiring a Remote Data Engineer (Australia)

Our Solution

Demyst unlocks innovation with the power of data. Our platform helps enterprises solve strategic use cases, including lending, risk, digital origination, and automation, by harnessing the power and agility of the external data universe. We are known for harnessing rich, relevant, integrated, linked data to deliver real value in production. We operate as a distributed team across the globe and serve over 50 clients as a strategic external data partner. Frictionless external data adoption within digitally advancing enterprises is unlocking market growth and allowing solutions to finally get out of the lab. If you actually like to get things done and deployed, Demyst is your new home.

The Opportunity

As a Data Engineer at Demyst, you will be powering the latest technology at leading financial institutions around the world. You may be solving a fintech's fraud problems or crafting a Fortune 500 insurer's marketing campaigns. Using innovative data sets and Demyst's software architecture, you will use your expertise and creativity to build best-in-class solutions. You will see projects through from start to finish, assisting in every stage from testing to integration.

To meet these challenges, you will access data using Demyst's proprietary Python library via our JupyterHub servers, and utilize our cloud infrastructure built on AWS, including Athena, Lambda, EMR, EC2, S3, and other products. For analysis, you will leverage AutoML tools, and for enterprise data delivery, you'll work with our clients' data warehouse solutions like Snowflake, DataBricks, and more.

Demyst is a remote-first company. The candidate must be based in Australia.

Responsibilities

  • Collaborate with internal project managers, sales directors, account managers, and clients' stakeholders to identify requirements and build external data-driven solutions
  • Perform data appends, extracts, and analyses to deliver curated datasets and insights to clients to help achieve their business objectives
  • Understand and keep current with external data landscapes such as consumer, business, and property data
  • Engage in projects involving entity detection, record linking, and data modelling
  • Design scalable code blocks using Demyst's APIs/SDKs that can be leveraged across production projects
  • Govern releases, change management, and maintenance of production solutions in close coordination with clients' IT teams

Requirements

  • Bachelor's in Computer Science, Data Science, Engineering, or a similar technical discipline (or commensurate work experience); Master's degree preferred
  • 1-3 years of Python programming (with Pandas experience)
  • Experience with CSV, JSON, parquet, and other common formats
  • Data cleaning and structuring (ETL experience)
  • Knowledge of APIs (REST and SOAP), HTTP protocols, API security, and best practices
  • Experience with SQL, Git, and Airflow
  • Strong written and oral communication skills
  • Excellent attention to detail
  • Ability to learn and adapt quickly

Benefits

  • Distributed working team and culture
  • Generous benefits and competitive compensation
  • Collaborative, inclusive work culture: all-company offsites and local get-togethers in Bangalore
  • Annual learning allowance
  • Office setup allowance
  • Generous paid parental leave
  • Be a part of the exploding external data ecosystem
  • Join an established fast-growth data technology business
  • Work with the largest consumer and business external data market in an emerging industry that is fueling AI globally
  • Outsized impact in a small but rapidly growing team offering real autonomy and responsibility for client outcomes
  • Stretch yourself to help define and support something entirely new that will impact billions
  • Work within a strong, tight-knit team of subject matter experts
  • Small enough where you matter, big enough to have the support to deliver what you promise
  • International mobility available for top performers after two years of service

Demyst is committed to creating a diverse, rewarding career environment and is proud to be an equal opportunity employer. We strongly encourage individuals from all walks of life to apply.

See more jobs at DemystData

Apply for this job

+30d

Data Engineer - AWS

Tiger Analytics - Jersey City, New Jersey, United States, Remote
S3, Lambda, airflow, sql, Design, AWS

Tiger Analytics is hiring a Remote Data Engineer - AWS

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning, and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

As an AWS Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on AWS cloud infrastructure. You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives. The ideal candidate will have strong experience with AWS services, Databricks, and Apache Airflow.

Key Responsibilities:

  • Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc. (see the sketch after this list)
  • Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements.
  • Build and maintain orchestration workflows using Apache Airflow to automate data pipeline execution, scheduling, and monitoring.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable data solutions.
  • Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies.

Requirements:

  • 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
  • Hands-on experience in designing and building data pipelines on AWS cloud infrastructure.
  • Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
  • Strong experience with Databricks and Apache Spark for data processing and analytics.
  • Hands-on experience with Apache Airflow for orchestrating and scheduling data pipelines.
  • Solid understanding of data modeling, database design principles, and SQL.
  • Experience with version control systems (e.g., Git) and CI/CD pipelines.
  • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
  • Strong problem-solving skills and attention to detail.
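
One common shape for the S3/Glue/Lambda pipelines named in the responsibilities (flagged above) is an event-driven trigger: an object lands in the raw bucket and a Lambda function starts a Glue job. A minimal sketch, assuming a hypothetical Glue job name and argument keys:

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Minimal AWS Lambda handler: when a new object lands in the raw
    bucket (S3 put event), kick off a Glue job to transform it. The job
    name and argument keys are placeholder assumptions."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="raw-to-curated",          # hypothetical Glue job
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
```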

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job

+30d

Data Engineer - Snowflake

Tiger Analytics - Chicago, Illinois, United States, Remote Hybrid
Design

Tiger Analytics is hiring a Remote Data Engineer - Snowflake

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning, and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

The Data Engineer will be responsible for architecting, designing, and implementing advanced analytics capabilities. The right candidate will have broad skills in database design, be comfortable dealing with large and complex data sets, have experience building self-service dashboards, be comfortable using visualization tools, and be able to apply those skills to generate insights that help solve business challenges. We are looking for someone who can bring their vision to the table and implement positive change in taking the company's data analytics to the next level.

Key Responsibilities:

Data Integration:

Implement and maintain data synchronization between on-premises Oracle databases and Snowflake using Kafka and CDC tools.

Support Data Modeling:

Assist in developing and optimizing the data model for Snowflake, ensuring it supports our analytics and reporting requirements.

Data Pipeline Development:

Design, build, and manage data pipelines for the ETL process, using Airflow for orchestration and Python for scripting, to transform raw data into a format suitable for our new Snowflake data model. (A minimal sketch follows.)
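
A minimal sketch of such an Airflow DAG, with two placeholder tasks standing in for the CDC extract and the Snowflake load; the DAG id, schedule, and callables are assumptions for illustration, not the client's actual pipeline:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_changes():
    """Placeholder: read the latest CDC batch (e.g. from a Kafka topic)."""

def load_to_snowflake():
    """Placeholder: stage the batch and MERGE it into Snowflake."""

with DAG(
    dag_id="oracle_to_snowflake_cdc",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_changes)
    load = PythonOperator(task_id="load", python_callable=load_to_snowflake)
    extract >> load                        # run the load after the extract
```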

Reporting Support:

Collaborate with the data architect to ensure the data within Snowflake is structured in a way that supports efficient and insightful reporting.

Technical Documentation:

Create and maintain comprehensive documentation of data pipelines, ETL processes, and data models to ensure best practices are followed and knowledge is shared within the team.

Tools and Skillsets:

Data Engineering: Proven track record of developing and maintaining data pipelines and data integration projects.

Databases: Strong experience with Oracle, Snowflake, and Databricks.

Data Integration Tools: Proficiency in using Kafka and CDC tools for data ingestion and synchronization.

Orchestration Tools: Expertise in Airflow for managing data pipeline workflows.

Programming: Advanced proficiency in Python and SQL for data processing tasks.

Data Modeling: Understanding of data modeling principles and experience with data warehousing solutions.

Cloud Platforms: Knowledge of cloud infrastructure and services, preferably Azure, as it relates to Snowflake and Databricks integration.

Collaboration Tools: Experience with version control systems (like Git) and collaboration platforms.

CI/CD Implementation: Utilize CI/CD tools to automate the deployment of data pipelines and infrastructure changes, ensuring high-quality data processing with minimal manual intervention.

Communication: Excellent communication and teamwork skills, with a detail-oriented mindset. Strong analytical skills, with the ability to work independently and solve complex problems.

Requirements

  • 8+ years of overall industry experience, specifically in data engineering
  • 5+ years of experience building and deploying large-scale data processing pipelines in a production environment
  • Strong experience in Python, SQL, and PySpark
  • Experience creating and optimizing complex data processing and data transformation pipelines using Python
  • Experience with the Snowflake Cloud Data Warehouse and the dbt tool
  • Advanced working SQL knowledge and experience with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases
  • Understanding of data warehouse (DWH) systems and migration from DWH to data lakes/Snowflake
  • Understanding of ELT and ETL patterns and when to use each; understanding of data models and transforming data into those models
  • Strong analytic skills for working with unstructured datasets
  • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job

+30d

Senior Data Engineer

Alt - US Remote
airflow, postgres, Design, c++, python, AWS

Alt is hiring a Remote Senior Data Engineer

At Alt, we're on a mission to unlock the value of alternative assets, and we're looking for talented people who share our vision. Our platform enables users to exchange, invest, value, securely store, and authenticate their collectible cards. And we envision a world where anything is an investable asset.

To date, we've raised over $100 million from thought leaders at the intersection of culture, community, and capital. Some of our investors include Alexis Ohanian's fund Seven Seven Six, the founders of Stripe, Coinbase co-founder Fred Ehrsam, BlackRock co-founder Sue Wagner, the co-founders of AngelList, First Round Capital, and BoxGroup. We're also backed by professional athletes including Tom Brady, Candace Parker, Giannis Antetokounmpo, Alex Morgan, Kevin Durant, and Marlon Humphrey.

Alt is a dedicated equal opportunity employer committed to creating a diverse workforce. We celebrate our differences and strive to create an inclusive environment for all. We are focused on fostering a culture of empowerment, which starts with providing our employees with the resources needed to reach their full potential.

What we are looking for:

We are seeking a Senior Data Engineer who is eager to make a significant impact. In this role, you'll get the opportunity to leverage your technical expertise and problem-solving skills to solve some of the hardest data problems in the hobby. Your primary focus will be on enhancing and optimizing our pricing engine to support strategic business goals. Our ideal candidate is passionate about trading cards, has a strong sense of ownership, and enjoys challenges. At Alt, data is core to everything we do and is a differentiator for our customers. The team's scope covers data pipeline development, search infrastructure, web scraping, detection algorithms, internal tooling, and data quality. We give our engineers a lot of individual responsibility and autonomy, so your ability to make good trade-offs and exercise good judgment is essential.

The impact you will make:

  • Partner with engineers and cross-functional stakeholders to contribute to all phases of algorithm development, including ideation, prototyping, design, and production
  • Build, iterate, productionize, and own Alt's valuation models (see the sketch after this list)
  • Leverage a background in pricing strategies and models to develop innovative pricing solutions
  • Design and implement scalable, reliable, and maintainable machine learning systems
  • Partner with product to understand customer requirements and prioritize model features
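
To give a flavor of the valuation-model work flagged above, here is a toy Python/scikit-learn sketch that fits a regressor on a few invented card features (grade, last comparable sale, population count). The features, data, and model choice are illustrative assumptions, not Alt's pricing engine.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Invented training data: [grade, last_comparable_sale, population_count]
X = np.array([
    [9.5, 1200.0, 150],
    [8.0,  300.0, 900],
    [10.0, 5000.0,  40],
    [9.0,  800.0, 300],
])
y = np.array([1350.0, 280.0, 5600.0, 760.0])   # observed sale prices

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Estimated value for a hypothetical new card.
print(model.predict([[9.5, 1100.0, 200]]))
```

A production pricing engine would train on far richer comparable-sales data and wrap the model in monitoring, backtesting, and retraining pipelines.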

What you bring to the table:

  • Experience: 5+ years of experience in software development, with a proven track record of developing and deploying models in production. Experience with pricing models preferred.
  • Technical Skills: Proficiency in programming languages and tools such as Python, AWS, Postgres, Airflow, Datadog, and JavaScript.
  • Problem-Solving: A knack for solving tough problems and a drive to take ownership of your work.
  • Communication: Effective communication skills with the ability to ship solutions quickly.
  • Product Focus: Excellent product instincts, with a user-first approach when designing technical solutions.
  • Team Player: A collaborative mindset that helps elevate the performance of those around you.
  • Industry Knowledge: Knowledge of the sports/trading card industry is a plus.

What you will get from us:

  • Ground-floor opportunity as an early member of the Alt team; you'll directly shape the direction of our company. The opportunities for growth are truly limitless.
  • An inclusive company culture that is being built intentionally to foster an environment that supports and engages talent in their current and future endeavors.
  • $100/month work-from-home stipend
  • $200/month wellness stipend
  • WeWork office stipend
  • 401(k) retirement benefits
  • Flexible vacation policy
  • Generous paid parental leave
  • Competitive healthcare benefits, including HSA, for you and your dependent(s)

Alt's compensation package includes a competitive base salary benchmarked against real-time market data, as well as equity for all full-time roles. We want all full-time employees to be invested in Alt and to be able to take advantage of that investment, so our equity grants include a 10-year exercise window. The base salary range for this role is $194,000 - $210,000. Offers may vary from the amount listed based on geography, candidate experience and expertise, and other factors.

See more jobs at Alt

Apply for this job

FanDuel is hiring a Remote Senior Data Platform Engineer
See more jobs at FanDuel

Apply for this job

+30d

Lead Data Engineer

Devoteam - Tunis, Tunisia, Remote
airflow, sql, scrum

Devoteam is hiring a Remote Lead Data Engineer

Job Description

Within the "Data Platform" division, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing your expertise to the following tasks:

  • Design, develop, and maintain robust, scalable data pipelines on Google Cloud Platform (GCP), using tools such as BigQuery, Airflow, Looker, and DBT.
  • Collaborate with business teams to understand data requirements and design appropriate solutions.
  • Optimize the performance of data processing and ELT workflows using Airflow, DBT, and BigQuery.
  • Implement data quality processes to guarantee data integrity and consistency.
  • Work closely with engineering teams to integrate data pipelines into existing applications and services.
  • Stay up to date with new technologies and best practices in data processing and analytics.

Qualifications

  • Engineering degree (Bac+5) or equivalent university degree with a specialization in computer science.
  • At least 3 years of experience in data engineering, with significant experience in a GCP-based cloud environment.
  • Advanced SQL proficiency for data optimization and processing.
  • Google Professional Data Engineer certification is a plus.
  • Excellent written and oral communication (high-quality deliverables and reporting).

See more jobs at Devoteam

Apply for this job

+30d

Sr. Data Engineer

Talent Connection - Pleasanton, CA, Remote
Design, java

Talent Connection is hiring a Remote Sr. Data Engineer

Job Description

Position Overview: As a Sr. Data Engineer, you will be pivotal in developing and maintaining data solutions that enhance our client's reporting and analytics capabilities. You will leverage a variety of data technologies to construct scalable, efficient data pipelines that support critical business insights and decision-making processes.

Key Responsibilities:

  • Architect and design data pipelines that meet reporting and analytics requirements.
  • Develop robust and scalable data pipelines to integrate data from diverse sources into a cloud-based data platform (see the sketch after this list).
  • Convert business needs into architecturally sound data solutions.
  • Lead data modernization projects, providing technical guidance and setting design standards.
  • Optimize data performance and ensure prompt resolution of issues.
  • Collaborate with cross-functional teams to create efficient data flows.
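
As one concrete reading of the cloud-platform bullet marked above, here is a minimal Python sketch that stages a CSV file and loads it into a Snowflake table using the official snowflake-connector-python package. The account, credentials, and object names are placeholders, not the client's environment.

```python
import snowflake.connector  # Snowflake's official Python connector

# Placeholder connection details; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="xy12345",
    user="LOADER",
    password="...",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()
try:
    # Stage a local file into the table's internal stage, then bulk-load it.
    cur.execute("PUT file://orders.csv @%ORDERS")
    cur.execute("COPY INTO ORDERS FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
finally:
    cur.close()
    conn.close()
```

In a fuller pipeline this load step would be one task in an orchestration tool, with dbt handling downstream modeling.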

Qualifications

Required Skills and Experience:

  • 7+ years of experience in data engineering and pipeline development.
  • 5+ years of experience in data modeling for data warehousing and analytics.
  • Proficiency with modern data architecture and cloud data platforms, including Snowflake and Azure.
  • Bachelor's Degree in Computer Science, Information Systems, Engineering, Business Analytics, or a related field.
  • Strong skills in programming languages such as Java and Python.
  • Experience with data orchestration tools and DevOps/DataOps practices.
  • Excellent communication skills, capable of simplifying complex information.

Preferred Skills:

  • Experience in the retail industry.
  • Familiarity with reporting tools such as MicroStrategy and Power BI.
  • Experience with tools like StreamSets and dbt.

See more jobs at Talent Connection

Apply for this job