Data Engineer Remote Jobs

13 Results

18d

Data Engineer Consultant

salesforce, Design, api

Price Benowitz LLP is hiring a Remote Data Engineer Consultant


See more jobs at Price Benowitz LLP

Apply for this job

+30d

AWS Data Engineer

Xpanxion - Remote
agile, 3 years of experience, sql, Design, api, c++, css, python, AWS, javascript

Xpanxion is hiring a Remote AWS Data Engineer


See more jobs at Xpanxion

Apply for this job

+30d

Data Engineer II

BlueLabs - Remote or Washington, District of Columbia, United States
terraform, airflow, sql, oracle, Design, api, docker, postgresql, kubernetes, linux, jenkins, python, AWS

BlueLabs is hiring a Remote Data Engineer II

About BlueLabs

BlueLabs is a leading provider of analytics services and technology for a variety of industry clients, including government, business, and political campaigns. We help our clients optimize their engagements with individual customers, supporters, and stakeholders to achieve their goals. Simply put: we help our partners do the most good by getting the most from their data.


Today, our team of data analysts, scientists, engineers, and strategists come from diverse backgrounds and share a passion for using data to solve the world’s greatest social and analytical challenges. We’ve served more than 400 organizations, including government agencies, advocacy groups, political campaigns, and businesses. Along the way, we’ve developed some of the most innovative tools available in analytics, media optimization, reporting, and influencer outreach, serving a diverse set of industries including automotive, travel, consumer packaged goods, entertainment, healthcare, media, telecom, and more.


About the team:

The BlueLabs Civic Technology practice revolutionizes the way government agencies use data to streamline operations, interact with citizens, and react to their biggest challenges. Our team develops deep expertise within the areas our clients care about most, then builds data analytics systems that create impact at scale. We work closely with internal government innovation groups, and build on the analytics methodology pioneered in e-commerce, advocacy, politics, and consumer finance.


You will work on a fast-paced, impact-driven project team charged with intelligently improving the operations powering the policies and programs that Americans interact with on a daily basis. Our work builds the frameworks and infrastructure that allow agencies to deal with big public health challenges. We bring definition to challenging problems, creatively identify data sources, then design analytics that accurately measure impact and move the needle every day. We work in changing, ambiguous environments, so we embrace nuance, inclusivity, and complexity to deliver programs that work for the diverse groups of individuals who rely on us.


About the role:

You will play a critical role in supporting a digital marketing solution for several high-profile websites run by the Centers for Medicare and Medicaid Services (CMS). The Data Ingestion Pipeline Developer will work with a team to produce dynamic, personalized email and SMS campaigns to deliver information relevant to each customer. You should have experience working in a cross-functional team of partners in which you were responsible for driving ____.


In this position you will:

  • Work closely with a team of ingestion/application integration engineers (‘pipeline developers’) responsible for the architecture, deployment, and operations of the infrastructure used by the wider team
  • Work closely with data analysts and data scientists to identify and effectively respond to their specific needs, especially related to client deliverables
  • Develop, test, and operationalize the data infrastructure that will contribute to client-facing products, such as writing and reviewing API endpoints.
  • Contribute to the maintenance of a well documented, consistent codebase.
  • Provide visibility into data transformations by designing and implementing data tests throughout existing pipelines (see the sketch after this list).
  • Extract business logic (ETL/ELT, metrics, metadata) from current data systems into portable cloud-agnostic layers.
  • Analyze, build, and deploy data models, including relational models for data warehousing.
  • Produce scalable, replicable code and engineering solutions that help automate repetitive data management tasks.
  • Support your teammates in implementing efficient and resilient data processes, with improvements such as optimization, good error handling, and incoming data checks.
  • Ensure the reliability, security, and performance of our program team infrastructure and our compliance with client policies and procedures, including remediating findings from regular systems auditing
  • Perform such other reasonable tasks as may be assigned by management
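
As an illustration only (not part of the posting), here is a minimal Python sketch of the kind of in-pipeline data test mentioned above; the table and column names are hypothetical:

    # Hypothetical check: fail fast if a transformed extract violates
    # basic expectations, so bad data halts the run instead of propagating.
    import pandas as pd

    def check_enrollments(df: pd.DataFrame) -> None:
        assert not df.empty, "enrollments extract is empty"
        assert df["beneficiary_id"].notna().all(), "null beneficiary_id"
        assert df["beneficiary_id"].is_unique, "duplicate beneficiary_id"
        assert df["plan_year"].between(2010, 2030).all(), "plan_year out of range"

    check_enrollments(pd.DataFrame({
        "beneficiary_id": [101, 102],
        "plan_year": [2023, 2023],
    }))

Checks like these can run as a pipeline step so that failures surface before data reaches downstream consumers.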


What we are seeking:

  • 3+ years of experience as a contributor to technical projects, such as working with complex data structures and pipelines 
  • Experience delivering on client priorities that operate on a regular deployment schedule while reacting to urgent ad hoc needs
  • Ability to manage your individual priorities and comfortably context-switch between active development, client discussion, and issue response
  • Effective communication skills when working with team members of varied backgrounds, roles, and functions
  • Experience performing nontrivial deployments of web applications and/or data pipelines and supporting production infrastructure
  • Experience designing data models that account for upstream and downstream system dependencies
  • Advanced understanding of how to test and debug scripts that process and manipulate data using SQL, R, and/or Python
  • Extensive experience with AWS (our primary cloud provider)
  • Familiarity with our broader tech stack is a plus: Terraform; containerization and cluster orchestration with Docker and Kubernetes; Linux systems, networking, and security; Vertica, Redshift, or Snowflake; Apache Airflow or Databricks; Oracle or PostgreSQL; CI/CD environments using Jenkins and CircleCI; AWS Glue; dbt; Spark; Hadoop; Jupyter
  • Passion in applying your skills to our social mission to problem-solve and collaborate on improving the functions of government agencies
  • The ability to successfully attain and maintain a Federal Public Trust background investigation that our government clients require; this includes a requirement that the individual has U.S. Citizenship or U.S. residency for three of the past five years


What recruitment looks like:

During the interview process, you will be asked to describe your background and experience relevant to the position. This may include providing examples of projects you worked on, tools or applications you've used, and knowledge you have applied. We often look for explanations of "how" or "why," so it's helpful to have details ready. The process will also include a technical assessment.


What We Offer:

BlueLabs offers a friendly work environment and competitive benefits package including:

  • Premier health insurance plan
  • 401K matching
  • Unlimited vacation leave
  • Paid sick, personal, and volunteer leave
  • 13 paid holidays
  • 15 weeks paid parental leave
  • Professional development stipend & tuition reimbursement
  • Employee Assistance Program (EAP)
  • Supportive & collaborative culture 
  • Flexible working hours
  • Remote friendly (within the U.S.)
  • And more! 


The salary for this position is $95,000+ annually. 


While we have an office in Washington, DC, we are open to considering candidates from within the U.S. 


At BlueLabs, we celebrate, support and thrive on differences. Not only do they benefit our services, products, and community, but most importantly, they are to the benefit of our team. Qualified people of all races, ethnicities, ages, sexes, genders, sexual orientations, national origins, gender identities, marital statuses, religions, veteran statuses, disabilities and any other protected classes are strongly encouraged to apply. As an equal opportunity workplace and an affirmative action employer, BlueLabs is committed to creating an inclusive environment for all employees. BlueLabs endeavors to make reasonable accommodations to the known physical or mental limitations of qualified applicants with a disability unless the accommodation would impose an undue hardship on the operation of our business. If an applicant believes they require such assistance to complete the application or to participate in an interview, or has any questions or concerns, they should contact the Director, People Operations. BlueLabs participates in E-Verify. EEO is the Law.

See more jobs at BlueLabs

Apply for this job

+30d

Data Engineer I

BlueLabs - Remote or Washington, District of Columbia, United States
tableau, airflow, sql, oracle, git, postgresql, python, AWS

BlueLabs is hiring a Remote Data Engineer I

About BlueLabs

BlueLabs is a leading provider of analytics services and technology dedicated to helping our partners do the most good with their data. Our team of analysts, scientists, engineers, and strategists hail from diverse backgrounds yet share a passion for using data to solve the world’s greatest social and analytical challenges. Since our inception we’ve worked with more than 400 organizations, including government agencies, advocacy groups, unions, political campaigns, and international groups. In addition, we service an ever-expanding portfolio of commercial clients in the automotive, travel, CPG, entertainment, healthcare, media, and telecom industries. Along the way, we’ve developed some of the most innovative tools available in analytics, media optimization, reporting, and influencer outreach.


About the team:

The Insights division creates and manages the underlying data that drives our day-to-day work. This is a new team at BlueLabs, created to meet our organization’s evolving client base and business needs, to ensure that BlueLabs is providing the most innovative data and analysis to our clients, all while developing and coaching team members to grow and respond to our company’s goals.

 

The BlueLabs Insights practice works with non-profit, political, and private sector clients to provide them with high quality analysis to better understand the environment they operate in and inform future decision making.

 

The Data Department is all about creating efficiency in the analytical process. The team helps ensure data pipelines are up and running, breaks down data silos to ensure we aren’t duplicating acquisition efforts, and creates documentation to help the Insights team better understand the data they are working with. The Data Department is also the go-to for analyzing the value of new data for acquisition and exploring how to better utilize our internal data.

 

About the role:

As a Data Engineer I, you will help the Data Department and Insights project teams establish and maintain data pipelines for internal data sources and data sources related to our client engagements. The Data Engineer I is critical to our client work, which requires proactive and continuous improvement as well as prompt responsiveness to changing circumstances – particularly in handling data quality issues and adjusting the team’s end-to-end pipeline deployments. Your track record stepping into this role should reflect domain knowledge in data ingestion and transformation, including experience adapting to changing technologies and/or client priorities. The Data Engineer reports to the Director, Data.

 

In this position you will:

  • Analyze, establish and maintain data pipelines that regularly deliver transformed data to data warehouses
  • Read in data; process and clean it; transform and recode it; merge different data sets together; reshape data between wide and long formats (see the sketch after this list); etc.
  • Create documentation for data pipelines and data sets
  • Work closely with data analysts to understand, identify and effectively respond to their specific needs
  • Coordinate with vendors, clients, and other stakeholders as needed to stand up data pipelines or respond to issues with them
  • Perform one-off data manipulation and analysis on a wide variety of campaign data
  • Produce scalable, replicable code and engineering solutions that help automate repetitive data management tasks
  • Make recommendations and provide guidance on ways to make programs, campaigns, and data collection more efficient and effective
  • Ensure a high-level of data accuracy through regular quality control
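
As an illustration only (not part of the posting), a minimal pandas sketch of the wide-to-long reshaping mentioned above; the column names are hypothetical:

    # Hypothetical reshape: one column per year (wide) becomes one row
    # per (voter_id, year) pair (long), and back again.
    import pandas as pd

    wide = pd.DataFrame({
        "voter_id": [1, 2],
        "contact_2021": ["email", "sms"],
        "contact_2022": ["phone", "email"],
    })

    long_df = wide.melt(id_vars="voter_id", var_name="year", value_name="contact")
    long_df["year"] = long_df["year"].str.replace("contact_", "", regex=False).astype(int)

    wide_again = long_df.pivot(index="voter_id", columns="year", values="contact")

Long format usually suits warehouse tables and aggregation; wide format is often what analysts want to read.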


What we are seeking:

  • About 2+ years of experience working with data pipeline solutions; beyond exact years, we seek candidates whose experience working with data pipelines provides the ability to proactively advise and then deploy solutions for our client-based teams
  • Experience processing data using scripting languages like Python or R
  • Understanding of how to manipulate data using SQL or Python
  • Experience designing data models that account for upstream and downstream system dependencies
  • Experience working in modern data processing stacks using tools like Apache Airflow, AWS Glue, and dbt
  • Experience with databases such as Amazon Redshift, Vertica, BigQuery, or Snowflake, and/or experience writing complex analytics queries in a general-purpose database such as Oracle or PostgreSQL
  • Familiarity with Git, or experience with another version control system
  • A high attention to detail and ability to effectively manage and prioritize several tasks or projects concurrently
  • Effective communication and collaboration skills when working with team members of varied backgrounds, roles, and functions
  • Passion in applying your skills to our social mission to problem-solve and collaborate within a cross-functional team environment
  • Ability to diagnose and improve database and query performance issues

 

You may also have experience:

  • Working on political campaigns or advocacy
  • Working with voter files or large consumer datasets
  • Working with geospatial data
  • Working with tools such as Phoenix, DNC Toolbox, VAN, PDI, or common finance, digital or field campaign platforms and their APIs
  • Working with business intelligence software like Tableau, PowerBi, or Data Studio

 

What we offer:

BlueLabs offers a friendly work environment and competitive benefits package including:

  • Salary: $85,000 annually
  • Premier health, dental, and vision insurance plans
  • 401K matching
  • Unlimited vacation
  • Paid sick, personal, and volunteer leave
  • 15 weeks paid parental leave
  • Professional development & learning stipend
  • Bring Your Own Device (BYOD) stipend
  • Employee Assistance Program (EAP)
  • Flexible working hours
  • Telecommuting/Remote options
  • Pre-tax transportation options
  • And more!

 

While we prefer this position to be in the Washington, DC area, we are open to considering candidates from within the U.S.


The salary range for candidates who meet the minimum posted qualifications reflects the Company’s good faith understanding and belief as to the wage range, and is accurate as of the date of this job posting.


To protect the health and safety of our workforce, as a company policy, BlueLabs strongly encourages all employees to be fully vaccinated against COVID-19 prior to beginning employment. BlueLabs adheres to all federal, state and local COVID-19 vaccination regulations. Except where prohibited by law, applicants who receive a conditional offer of employment will be required to produce proof of vaccination status prior to their first day of employment; if not the offer may be rescinded or employment terminated. BlueLabs will evaluate requests for reasonable accommodations for applicants unable to be vaccinated due to a religious belief, disability or pregnancy on an individualized basis in accordance with applicable laws.

 

At BlueLabs, we celebrate, support and thrive on differences. Not only do they benefit our services, products, and community, but most importantly, they are to the benefit of our team. Qualified people of all races, ethnicities, ages, sexes, genders, sexual orientations, national origins, gender identities, marital statuses, religions, veteran statuses, disabilities and any other protected classes are strongly encouraged to apply. As an equal opportunity workplace and an affirmative action employer, BlueLabs is committed to creating an inclusive environment for all employees. BlueLabs endeavors to make reasonable accommodations to the known physical or mental limitations of qualified applicants with a disability unless the accommodation would impose an undue hardship on the operation of our business. If an applicant believes they require such assistance to complete the application or to participate in an interview, or has any questions or concerns, they should contact the Director, People Operations. BlueLabs participates in E-Verify. EEO is the Law.


Collection of Personal Information Notice:

As you are likely aware, by submitting your job application, you are submitting personal information to our company. We collect various categories of personal information, including identifiers, protected classifications, professional or employment related information and sensitive personal information. We may retain and use this information for up to three years, in order to come to a decision on whether or not you are a good fit for our company. We may also retain or use some of this information to comply with any requirements under law, or for purposes of defending ourselves in any litigation. We do not use this information for any other purpose, or share it with third parties, unless you become an employee. To learn more, or to see our full Notice to Job Applicants, please click here.

See more jobs at BlueLabs

Apply for this job

+30d

Data Engineer

BlueLabs - Remote or Washington, District of Columbia, United States
tableau, airflow, sql, oracle, mobile, git, java, c++, postgresql, python, AWS

BlueLabs is hiring a Remote Data Engineer

About BlueLabs

BlueLabs is a leading provider of analytics services and technology for a variety of industry clients, including government, business, and political campaigns. We help our clients optimize their engagements with individual customers, supporters, and stakeholders to achieve their goals. Simply put: we help our partners do the most good by getting the most from their data.


Today, our team of data analysts, scientists, engineers, and strategists come from diverse backgrounds and share a passion for using data to solve the world’s greatest social and analytical challenges. We’ve served more than 400 organizations, including government agencies, advocacy groups, unions, political campaigns, international groups, and companies. Along the way, we’ve developed some of the most innovative tools available in analytics, media optimization, reporting, and influencer outreach, serving a diverse set of industries including automotive, travel, consumer packaged goods, entertainment, healthcare, media, telecom, and more.


About the team:

The Insights division creates and manages the underlying data that drives our day-to-day work. This is a new team at BlueLabs, created to meet our organization’s evolving client base and business needs, to ensure that BlueLabs is providing the most innovative data and analysis to our clients, all while developing and coaching team members to grow and respond to our company’s goals.

 

The BlueLabs Insights practice works with non-profit, political, and private sector clients to provide them with high quality analysis to better understand the environment they operate in and inform future decision making. 


Ripple is a proprietary, business-to-business technology product that helps our clients identify, engage, and measure impact on influencers who matter to their causes or brands. You will join a small and growing team of data, engineering, and product professionals to help us solve some exciting and challenging problems and to deploy the next generation of Ripple. 


About the role:

As a Data Engineer, you will help the Ripple team establish and maintain data pipelines for internal data sources and data sources related to our client engagements. The Data Engineer is critical to our client work, which requires proactive and continuous improvement as well as prompt responsiveness to changing circumstances – particularly in handling data quality issues and adjusting the team’s end-to-end pipeline deployments. Your track record stepping into this role should reflect domain knowledge in data ingestion and transformation, including experience adapting to changing technologies and/or client priorities. The Data Engineer reports to the Ripple Product Manager.


In this position you will:

  • Analyze, establish and maintain data pipelines that regularly deliver transformed data to data warehouses
  • Read in data; process and clean it; transform and recode it; merge different data sets together; reshape data between wide and long formats; etc.
  • Create documentation for data pipelines and data sets
  • Work closely with data analysts to understand, identify and effectively respond to their specific needs
  • Coordinate with vendors, clients, and other stakeholders as needed to stand up data pipelines or respond to issues with them
  • Perform one-off data manipulation and analysis on a wide variety of data sets
  • Produce scalable, replicable code and engineering solutions that help automate repetitive data management tasks
  • Make recommendations and provide guidance on ways to make data collection more efficient and effective
  • Develop and streamline our internal data resources into more efficient and easier to understand taxonomies
  • Ensure a high-level of data accuracy through regular quality control


What we are seeking:

  • About 2+ years of experience working with data pipeline solutions; beyond exact years, we seek candidates whose experience working with data pipelines provides the ability to proactively advise and then deploy solutions for our client-based teams
  • Experience processing data using scripting languages like Python or R, or compiled languages like Java, C++, or Scala; Python preferred
  • Understanding of how to manipulate data using SQL or Python
  • Experience designing data models that account for upstream and downstream system dependencies
  • Experience working in modern data processing stacks using tools like Apache Airflow, AWS Glue, and dbt
  • Experience with an MPP database such as Amazon Redshift, Vertica, BigQuery, or Snowflake, and/or experience writing complex analytics queries in a general-purpose database such as Oracle or PostgreSQL
  • Familiarity with Git, or experience with another version control system
  • A high attention to detail and ability to effectively manage and prioritize several tasks or projects concurrently 
  • Effective communication and collaboration skills when working with team members of varied backgrounds, roles, and functions
  • Passion in applying your skills to our social mission to problem-solve and collaborate within a cross-functional team environment
  • Ability to diagnose and improve database and query performance issues


You may also have experience:

  • Working on political campaigns or in progressive advocacy
  • Working with voter files or large consumer datasets
  • Working with geospatial data
  • Working with business intelligence software like Tableau, PowerBi, or Data Studio


Recruitment process

We strive to hire efficiently and transparently. We expect to fill this position in January 2023. To get there, we anticipate the successful candidate will complete three interviews (HR, 15 minutes; technical interview, 45 minutes; team interview, 60 minutes), all virtual.


What We Offer:

BlueLabs offers a friendly work environment and competitive benefits package including:

  • Premier health insurance plan
  • 401K matching
  • Unlimited vacation leave
  • Paid sick, personal, and volunteer leave
  • 15 weeks paid parental leave
  • Professional development & learning stipend
  • Macbook Pro laptop & tech accessories
  • Bring Your Own Device (BYOD) stipend for mobile device
  • Employee Assistance Program (EAP)
  • Supportive & collaborative culture 
  • Flexible working hours
  • Telecommuting/Remote options
  • Pre-tax transportation options 
  • Lunches and snacks
  • And more! 


The salary for this position is $85,000 annually.


While we prefer this position to be in the Washington, DC area, we are open to considering candidates from within the U.S. 


At BlueLabs, we celebrate, support and thrive on differences. Not only do they benefit our services, products, and community, but most importantly, they are to the benefit of our team. Qualified people of all races, ethnicities, ages, sexes, genders, sexual orientations, national origins, gender identities, marital statuses, religions, veteran statuses, disabilities and any other protected classes are strongly encouraged to apply. As an equal opportunity workplace and an affirmative action employer, BlueLabs is committed to creating an inclusive environment for all employees. BlueLabs endeavors to make reasonable accommodations to the known physical or mental limitations of qualified applicants with a disability unless the accommodation would impose an undue hardship on the operation of our business. If an applicant believes they require such assistance to complete the application or to participate in an interview, or has any questions or concerns, they should contact the Director, People Operations. BlueLabs participates in E-Verify.

See more jobs at BlueLabs

Apply for this job

+30d

Data Engineering

Find the job @Apside - Remote job, Remote
agile, git

Find the job @Apside is hiring a Remote Data Engineering

Job description

Want to join a learning company? One committed to supporting you in your professional development and your personal projects?

Join Apsid’EA to work as a Data Engineer (m/f)!

The role?

Embedded in a human-scale team working on behalf of our client, you will join our teams to contribute to the success of our projects, bringing your skills and your team spirit.

You will take part in the full range of activities and propose solutions suited to the needs of our projects.

You will work in an Agile context within a collaborative working atmosphere.

You will have the opportunity to contribute to:

Reverse engineering the existing code to integrate it into a new template

Splitting the front end from the back end

Distributing computations across a distributed infrastructure

Acceptance testing of developments and analysis of discrepancies.

The expected deliverables are:

Code for computing key risk indicators (KRIs)

Reverse engineering of the existing code, integrated into a new template

A front-end/back-end split

Distribution of computations across a distributed infrastructure

This is a permanent (CDI) position.

See more jobs at Find the job @Apside

Apply for this job

+30d

Data Engineer

agile, remote-first, kotlin, Design, kubernetes, python

YAZIO GmbH is hiring a Remote Data Engineer

Hey there!

Are you an ambitious self-starter passionate about data and technology? Do you enjoy working with large data sets, contributing to technological decisions and taking part in implementing new technologies? Are you excited about joining a dynamic team and helping people all over the world live healthier lives? 

If your answer is yes to all of these questions, you’d be a great addition to our Data Engineering Team and could help take YAZIO to the next level.

Want to know what to expect? You will find an overview of the Data Engineering technology stack here.

This job is 100% remote and can be based in Germany, Italy, Spain, Portugal or the U.K.

Your Mission
  • Design, develop and maintain the foundation for YAZIO’s data-driven decision-making
  • Write and operate a wide range of data pipelines using Kotlin, Dagster and Kubernetes (see the sketch after this list)
  • Ingest and transform data from various APIs and sources
  • Understand and optimize query performance for our data lake
  • Deploy and operate various services for our data analysts, such as Apache Zeppelin, Dash and Trino
  • Work on a data lake that grows by approximately 200 million records per day
  • Help make technological decisions from day one
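
As an illustration only (not from the posting), a minimal Python sketch of two dependent Dagster assets, standing in for the kind of ingest-and-clean pipeline listed above; the asset names and logic are hypothetical:

    # Hypothetical Dagster assets: ingest raw records, then clean them
    # before they land in the data lake. Dagster wires the dependency
    # from the parameter name.
    from dagster import Definitions, asset

    @asset
    def raw_api_events():
        return [{"user_id": 1, "meal": "breakfast"}]

    @asset
    def cleaned_events(raw_api_events):
        return [e for e in raw_api_events if "user_id" in e]

    defs = Definitions(assets=[raw_api_events, cleaned_events])
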
Your Profile
  • 3+ years of experience in either data engineering, software engineering or system administration
  • Strong ambition to learn what you don’t know about the tasks above
  • An independent, goal-oriented work ethic
  • Open-minded towards the implementation of new technology
  • Some experience with Agile and DevOps methodologies
  • Experience in some of the following or comparable technologies: Kotlin, Python, Kubernetes, Trino, Apache Hive, Apache Spark, Dagster, Apache Parquet
  • A good grasp on information security best practices
  • Fluent in English with excellent written communication skills; German is a plus
  • Interest in nutrition and fitness is a plus
Why us?
  • An exciting product with millions of users in over 150 countries, localized in 20 languages
  • A remote-first culture, working 100% from home with options to join a co-working space and to work from abroad for several weeks
  • An international team with English as our company language
  • Access to state-of-the-art technical equipment (e.g., Macbook, external monitor)
  • 30 days of paid vacation
  • High-impact work environment with short decision-making processes
  • A work culture characterized by focus and efficiency: We do not work overtime and, on the rare occasions this is necessary, you can take the additional hours off at another time.
  • Yearly company retreat and additional team events online and offline
Sound like you?
Ready to take YAZIO to the next level together and help people all over the world lead healthier lives? We look forward to hearing from you and receiving:
  • Your CV
  • A short introduction or video explaining who you are and why you want to work at YAZIO
  • Feel free to share something that shows us a little more about your personality and interests (e.g., your Twitter or Instagram account or your blog/website).
About us
YAZIO was founded in 2014 and, with millions of users, is one of the most successful nutrition apps in the world. YAZIO has a mission: To help as many people as possible live healthier lives through better nutrition. With users in more than 150 countries, we’re well on our way to accomplishing this goal. As a remote-first company, we promote a modern form of employment in which our team works together across several cities and countries.
 
Find out more about our team, our application process and open positions here: www.yazio.com/en/jobs

See more jobs at YAZIO GmbH

Apply for this job

+30d

Lead Data Engineer

InnovateEDU - Remote, New York, United States
jira, terraform, airflow, sql, slack, docker, python, backend

InnovateEDU is hiring a Remote Lead Data Engineer

About InnovateEDU

InnovateEDU is a non-profit whose mission is to eliminate the opportunity gap by accelerating innovation in standards-aligned, next generation learning models and tools that serve, inform, and enhance teaching and learning. InnovateEDU is committed to massively disrupting K-12 public education by focusing on the development of scalable tools and practices that leverage innovation, technology, and new human capital systems to improve education for all students and close the opportunity gap.


About the Project

InnovateEDU strives to create real tooling and projects that help schools, districts, and states move toward embracing data standards, a data-driven culture, and data interoperability. Landing Zone, a project at InnovateEDU, provides school districts with a comprehensive cloud-based data infrastructure: an Ed-Fi Operational Data Store (ODS), a data mart for analytics in Google BigQuery, and the Apache Airflow data workflows needed to connect previously siloed, disparate educational data systems. Landing Zone simplifies the process a district must go through to implement an Ed-Fi ODS, connect Ed-Fi certified data sources, and consume non-Ed-Fi-certified data once it has been aligned to the standard. The project has a heavy focus on data engineering, backend work, DevOps, and using data analytics tools to verify data.
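
As an illustration only (not part of the posting), a minimal Apache Airflow sketch of the kind of workflow described above; the task names and sync logic are hypothetical stubs, not actual Landing Zone code:

    # Hypothetical DAG: pull records from a student information system,
    # then load them into an Ed-Fi ODS, once a day.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def pull_from_sis():
        ...  # extract rosters from the source system's API

    def load_to_ods():
        ...  # post the extracted records to the Ed-Fi ODS API

    with DAG(
        dag_id="sis_to_edfi_ods",
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        pull = PythonOperator(task_id="pull_from_sis", python_callable=pull_from_sis)
        load = PythonOperator(task_id="load_to_ods", python_callable=load_to_ods)
        pull >> load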


Who You Are

You are a mission-driven individual and believe in working to close the educational opportunity gap through the use of data and technical solutions. You are excited about bringing order to disparate data and writing data pipelines, and you don’t mind being relentless in pursuing data accuracy. You’ve previously worked with SQL and Python and written code that interacts with APIs.


You are an optimistic problem-solver. You believe that together we can create real solutions that help the entire education sector move forward despite its complexity. You are excited to join a small but growing team working on an early-stage product and are looking forward to working on many different pieces of that product. You are open to feedback, bring your best every day, and are ready to grow in all areas of your work. You want to join a team of folks who share your vision for mission-driven work at the intersection of education and technology.  Finally, you know that sharing often is key to this work, and are ready to document everything that you do so that data people in schools everywhere can benefit. This is not a big data project; we have smaller amounts of data across many domains.  


Experience and Skills

You are a good fit if you:

  • Have worked as a data analyst or data engineer in the past and are familiar with validating data and tools like Google BigQuery and Google Data Studio
  • Have strong computer science fundamentals and experience with Python and specifically with Apache Airflow 
  • Have experience with dbt
  • Have experience with ETL and tools like Pandas and Jupyter Notebooks
  • Consider yourself to have a very high attention to detail
  • Have strong communication skills with both technical and non-technical people
  • Are passionate about making an impact in K-12 education
  • Are comfortable doing many different types of tasks and having to context switch between tasks relatively often
  • Are passionate about building the best version of whatever you’re working on
  • Are highly motivated to work autonomously, with strong organizational and time management skills


You’ll have an edge if you:

  • Have experience with and knowledge of Kubernetes, Docker, and Terraform
  • Have worked in K-12 education in the past


Responsibilities


The Lead Data Engineer’s primary professional responsibilities will include, but not be limited to:

  • Managing and supervising a team of engineers and data analysts
  • Establishing a team culture and communication cadence which includes daily standup, code reviews, and ensuring timely customer responses
  • Collaborating with the Customer Success Lead and team to ensure cohesion between engineering and implementation
  • Leading estimates and work scope development for custom engineering work for customers 
  • Mentoring, teaching, and aiding in the professional development of team members
  • Implementing and maintaining Landing Zone for new and returning customers
  • Leading the creation, troubleshooting, and maintenance of data processing pipelines in Apache Airflow (ETL work)
  • Running reports and exports in edTech source systems as well as Landing Zone infrastructure to perform data validation checks and communicate those back to our customers
  • Maintaining Landing Zone documentation to ensure it is always up-to-date and reflective of how integrations function
  • Deploying code updates across the Landing Zone customer base
  • Leading the deployment of infrastructure on the Google Cloud Platform for new customers
  • Leading in the development of a historical/longitudinal data storage system (data warehouse)
  • Responding to customer support tickets (this is a shared responsibility on our team)
  • Working with internal systems such as JIRA, Asana, Slack to stay organized and ensure communication with team members
  • Other duties as assigned 


What to expect in the hiring process:

  • An introductory phone call with a Manager
  • A coding project that will take about 2 hours; it will be in Python and involve processing data
  • A project review and feedback call with the team 
  • Final round interviews, likely including our Executive Director


The range for this position will be $110,000 to $148,000. Salary is commensurate with education and experience.


Application Instructions

Please submit an application on this platform. Applications without both a resume and cover letter will not be considered.



See more jobs at InnovateEDU

Apply for this job

+30d

Middle Data Engineer

OpenVPN - Remote job, Remote
sql, Design, azure, linux

OpenVPN is hiring a Remote Middle Data Engineer

As one of the world leaders in the cybersecurity space, OpenVPN is looking for a Mid-level Data Engineer who will join our existing Data Engineering team to continue building and expanding our company’s data platform.


Our philosophy is that we are a small, closely-knit team, and we care deeply about you:

  • Competitive pay rates
  • Fully remote work environments
  • Generous time off opportunities
  • Team trips and special events
  • A family-like work atmosphere

You will:

  • Improve data design efficiency
  • Deploy company policies around data access
  • Operate and repair ETLs
  • Perform migrations from current architecture to new architecture
  • Tune query performance
  • Respond to incidents

Challenges:

  • Help the business scale through a three-order-of-magnitude growth in data demand
  • Build a robust data handling and presentation framework
  • Build on Google Cloud Platform: BigQuery, CloudSQL, Dataflow (see the sketch below)
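
As an illustration only (not part of the posting), a minimal Python sketch of querying BigQuery with Google's official client; the dataset and table names are hypothetical, and credentials are assumed to come from the environment:

    # Hypothetical daily-sessions rollup against a BigQuery table.
    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT DATE(connected_at) AS day, COUNT(*) AS sessions
        FROM `analytics.vpn_sessions`
        GROUP BY day
        ORDER BY day DESC
        LIMIT 7
    """
    for row in client.query(query).result():
        print(row.day, row.sessions)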

See more jobs at OpenVPN

Apply for this job

+30d

Data Engineer

Tutela Technologies - Boston, Massachusetts, United States
airflow, sql, Design, python, AWS

Tutela Technologies is hiring a Remote Data Engineer

Job title: Data Engineer

Department: Engineering

Reporting to: Director of Engineering

Location: Remote - US / Boston, MA


How will you make an impact...

Opensignal is looking for a Data Engineer to join our Engineering team. If you have a passion for solving difficult problems, a desire to continue learning and strong programming fundamentals, then we want to speak with you.


As a Data Engineer, you’ll join a team focused on building data pipelines to support new and existing products, optimizing existing processes, and integrating new data sets. You will be involved in all aspects of the software development life cycle, including gathering business requirements, analysis, design, development and production support, and will be responsible for implementing and supporting highly efficient and scalable MSSQL and Python processes. You should be able to work collaboratively with other team members, as well as with users for operational support, and you must be focused, hard-working, self-motivated, and enjoy working on complex problems.


What will you be doing?

  • Support, maintain and evolve existing data pipelines utilizing MSSQL, SSIS, and Python.
  • Implement business rule changes and enhancements in existing data pipelines.
  • Automate existing processes.
  • Document data mappings, data dictionaries, processes, programs and solutions as per established standards for data governance.
  • Troubleshoot data issues and defects to determine root cause.
  • Perform job monitoring, root cause analysis and resolution, and support production processes.
  • Perform tuning of SQL queries, and recommend and implement query tuning techniques
  • Recommend corrective action when necessary to improve performance, capture exceptions, and maintain scheduled jobs.
  • Assist in data migration needs of the department and company where applicable.
  • Develop ETL technical specifications, design, code, test, implement, and support optimal data solutions.
  • Create new pipelines in SQL / Python supporting new product development (see the sketch after this list).
  • Design and develop SQL Server stored procedures, functions, views, transformation queries and triggers.
  • Take directions and complete tasks on-time with minimal supervision.
  • Recommend backup strategies for all data migration projects/applications.
  • Interact with IT management regarding work assignments and report status.
  • Simultaneously work on multiple projects with changing priorities.
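
As an illustration only (not part of the posting), a minimal Python sketch of the SQL-plus-Python glue such pipelines involve, reading from SQL Server into pandas via pyodbc; the server, database, and table names are hypothetical:

    # Hypothetical extract: average throughput per device since a cutoff date.
    import pandas as pd
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=db.example.internal;DATABASE=telemetry;"
        "Trusted_Connection=yes;"
    )
    df = pd.read_sql(
        "SELECT device_id, AVG(download_kbps) AS avg_kbps "
        "FROM measurements WHERE measured_at >= ? "
        "GROUP BY device_id",
        conn,
        params=["2023-01-01"],
    )
    print(df.head())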


Desired Skills & Abilities

  • Familiarity with Redshift, Aurora, Airflow, etc. is highly desired.
  • Experience with AWS Environment or similar Cloud Services is highly preferred.
  • Strong data analysis and troubleshooting skills.
  • Production support experience.
  • Experience in the Telecom/cable industry.


Required Skills & Abilities

  • Minimum 5-6 years of SQL development experience, including design, development, testing, implementation and maintenance of SQL Server processes on both AWS and on-prem servers.
  • Basic familiarity with Python. 
  • Solid experience developing complex SQL statements, T-SQL wrappers & procedures, SSIS, functions, views, triggers, etc.
  • Excellent query-writing skills.
  • Strong knowledge of ETL and Data Warehouse development best practices.
  • Experience working with high-volume databases.
  • Experience in data mapping, data migration and data analysis.
  • Ability to work independently with minimal supervision.
  • Strong analytical and problem-solving skills.
  • Excellent verbal, written and interpersonal skills.
  • Ability to function effectively in a fast paced, team-oriented work environment.
  • Willingness to adapt and learn new technologies and methodologies.


At this time, Opensignal will not sponsor new applicants for employment for this position.


This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S.


About US

Opensignal is the leading global provider of independent insight and data into network experience and market performance. Our user-centric approach allows communication providers to constantly improve their network and maximise commercial performance.  Leading analysts, investors and financial institutions place a high value on our independent analysis and we are regular contributors to their reports.

 

Real network experience is our focus and ultimately that’s what influences customer choice. Our mission is to advance connectivity for all and here at Opensignal, the team is leading the industry in enabling operators to link their network experience and market performance in a way that has never before been possible.

 

With offices in London, Boston and Victoria, British Columbia, we are truly global, with employees working across four continents and representing over 25 nationalities. We are an equal opportunity employer dedicated to building an inclusive and diverse workforce. 


Benefits

We believe we are stronger when we not only celebrate our many differences, values, and voices but include them in everyday practice. Having a diverse and inclusive culture is essential, which is why we offer a flexible approach to work-life balance, operating in a remote-hybrid way. We’ll help you get set up with the essentials you need to work from home or the office. We also offer an attractive range of additional benefits, including:


  • Competitive compensation packages and global company ownership benefits.
  • Comprehensive group benefits package and company sponsored retirement savings plan (details depend on your country of work).
  • Professional development opportunities: education reimbursement, facilitator-led training, workshops, knowledge bites (internal learning talks) and more!
  • Generous holiday allowance, sick leave, parental leave, flexible working culture and the opportunity to work from abroad.
  • Charity matching, paid time off for community volunteering, mentorship, and DE&I program/committees.
  • Regular virtual and in-person events and socials.
  • We’ll support you to set up an effective home office environment.

See more jobs at Tutela Technologies

Apply for this job

+30d

Data Engineer

Lighthouse Labs - Remote, Ontario, Canada
1 year of experience, sql, git, python

Lighthouse Labs is hiring a Remote Data Engineer

Lighthouse Labs is looking to add a new Data Engineer to help scale our next stage of growth as we expand into new markets, increase our offerings and diversify our education programs! This role will be focused on building and maintaining our Data Lake and other existing data and machine learning pipelines. It reports directly to the Head of Data and will work within our Data Team to build new data processing pipelines and data lake tables using dbt, and to assist in deploying machine learning models effectively to support our internal processes and to improve and personalize our existing learning products and experience. The ideal candidate will be passionate about solving data problems using the right data engineering skills and tools.


What you’ll be doing:

  • Use specific tools and APIs to extract data from various sources and store them in our data lake (see the sketch after this list)
  • Work with dbt to transform the data inside the data lake
  • Clean and prepare the data to make them easily accessible to other teams at Lighthouse Labs
  • Assist in selecting the best data engineering tools to support our growth
  • When needed, deploy and maintain machine learning models to improve the company’s decision making
  • Collaborate with tech and product teams to resolve data issues and ensure delivery and compliance 
  • Prepare appropriate infrastructure so reports and dashboards can be easily delivered to stakeholders
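
As an illustration only (not part of the posting), a minimal Python sketch of pulling records from an API and landing them in a lake as Parquet; the URL and path are hypothetical, and writing Parquet assumes pyarrow is installed:

    # Hypothetical ingest: fetch JSON records and land them as a dated
    # Parquet file in the lake's raw zone (object storage in practice).
    import pandas as pd
    import requests

    resp = requests.get("https://api.example.com/v1/enrollments", timeout=30)
    resp.raise_for_status()

    df = pd.DataFrame(resp.json())
    df.to_parquet("data-lake/raw/enrollments/2023-01-01.parquet")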


What we need from you:

  • At least 1 year of experience working as a data engineer or 2 years as a data scientist
  • Strong database knowledge in order to analyze and process data stored in data warehouses. 
  • Advanced SQL knowledge
  • Advanced knowledge of Python and Jupyter Notebooks
  • Understanding of command line and basic knowledge of Bash programming language
  • Good understanding of modern code development practices including DevOps/DataOps
  • Familiarity with dbt is an asset
  • Familiarity with BigQuery and other Google Cloud solutions is an asset
  • Ability to work with version control tools (git)
  • Critical thinking and proven problem-solving abilities

 

Why you’ll like the job:  

What we offer:

  • A fast-paced culture focused on continuous learning and growth
  • 4 WEEKS PTO! (15 vacation days, 5 personal days)
  • Unlimited sick days
  • A remote working budget to get your home office up and running
  • A learning fund to support professional development
  • Flexible working hours
  • 100% employer-paid health benefits


About us: 

Lighthouse Labs was founded in 2013 with the mission to effectively and efficiently prepare the workforce with the analytical and technical skills necessary to succeed in a world of automation. With an initial focus on our open-enrolment developer bootcamp, we have grown into a leading provider of professional education services, delivering outstanding educational outcomes for our students. Our secret? Innovative curriculum, proprietary edtech, unique mentorship and career services and partnerships with government and industry leading organizations. We’re a bunch of quirky, inclusive and smart people who are changing lives by reimagining education - join us!


Lighthouse Labs is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All positions at this time are remote, and we welcome all applicants. Talk to us to find out about our learning fund and other perks!

See more jobs at Lighthouse Labs

Apply for this job

+30d

Data Engineer | Python or Go (f/m/d) - Cologne, Amsterdam, London or remote (2)

DeepL sucht Mitarbeiter - Remote job, Remote
agile, Design, metal, kubernetes, python

DeepL sucht Mitarbeiter is hiring a Remote Data Engineer | Python or Go (f/m/d) - Cologne, Amsterdam, London or remote (2)

DeepL is Germany's best-known AI company. We develop neural networks to help people work with language. With DeepL Translator, we have created the world's best machine translation system and made it available free of charge to everyone online. Over the next few years, we aim to make DeepL the world's leading language technology company.

Our goal is to overcome language barriers and bring cultures closer together.


What distinguishes us from other companies?

DeepL (formerly Linguee) was founded by developers and researchers. We focus on the development of new, exciting products, which is why we spend a lot of time actively researching the latest topics. We understand the challenges of developing new products and try to meet them with an agile and dynamic way of working. Our work culture is very open because we want our employees to feel comfortable. In our daily work we use modern technologies - not only to translate texts, but also to create the world's best dictionaries, and solve other language problems.

When we tell people about DeepL as an employer, reactions are overwhelmingly positive. Maybe it's because they have enjoyed our services, or maybe they just want to get on board with our quest to break down language barriers and facilitate communication.


Your choice
We are constantly looking for outstanding employees! Currently we offer remote work in Germany, the Netherlands, the UK and Poland. Whether you would like to work from home in one of these countries or from one of our offices in Cologne or Paderborn: the choice is yours. No matter where you choose to work from, our way of working is designed to make you an essential part of the team.


What will you be doing at DeepL?

We are looking for an experienced Data Engineer to help build and improve our in-house Data Platform spanning multiple data centers. The Data Platform combines different data sources, both internal and external, and makes them available to our stakeholders company-wide: Developers, Product Development, Data Science, and Management. You will work in a cross-functional team with Product Managers, Data Scientists, Data Engineers, and Developers to make the future at DeepL bright and data-driven.

See more jobs at DeepL sucht Mitarbeiter

Apply for this job

+30d

Senior Data Engineer (w/m/d) - Language Data

DeepL sucht Mitarbeiter - Remote job, Remote
ansible, c++, python, AWS

DeepL sucht Mitarbeiter is hiring a Remote Senior Data Engineer (w/m/d) - Language Data

DeepL is Germany's best-known AI company. We develop neural networks that help people work with language. With DeepL Translator, we have brought the world's best machine translation to market and made it available to everyone online, free of charge. Over the next few years, we want to grow DeepL into the world's leading language technology company.

Our goal is to overcome language barriers and bring cultures closer together.


What distinguishes us from other companies?

DeepL (formerly Linguee) was founded by developers and researchers. Developing new, exciting products comes first for us, which is why we spend a lot of time actively researching the latest topics. We understand the challenges of developing new products and try to meet them with an agile and dynamic way of working. Our work culture is very open, because we want our employees to feel comfortable. In our daily work we use modern technologies - not only to translate texts, but also to create the world's best dictionaries and to solve other language problems.

When we talk about DeepL or Linguee as an employer, many people react very positively, often because they have already enjoyed our free, open services and apps. And we are glad to help shrink language barriers.


Work from wherever you like

You can decide whether you want to work from home or in the office. Our way of working is designed to make you an integral part of the team, no matter where you work. That is why we are looking for outstanding employees throughout Germany.


What will you be doing at DeepL?

For our translation technology to learn the subtleties of human language, we need enormous amounts of linguistic data. You will join a small team that takes care of sourcing, filtering, preparing, and assessing the quality of this data. Along the way, you will use, improve, and extend our pipeline and orchestrate hundreds of servers - both on dedicated hardware and in the cloud.

See more jobs at DeepL sucht Mitarbeiter

Apply for this job