airflow Remote Jobs

85 Results

+30d

Data Analyst

Remote · Remote-LATAM
airflow, sql

Remote is hiring a Remote Data Analyst

About Remote

Remote is solving global remote organizations’ biggest challenge: employing anyone anywhere compliantly. We make it possible for businesses big and small to employ a global team by handling global payroll, benefits, taxes, and compliance. Check out remote.com/how-it-works to learn more, or if you’re interested in adding to the mission, scroll down to apply now. We're backed by A+ investors, and our team is world-class, literally and figuratively, as we're all scattered around the world.

Please take a look at remote.com/handbook to learn more about our culture and what it is like to work here. Not only do we encourage folks from all ethnic groups, genders, sexualities, ages, and abilities to apply, but we prioritize a sense of belonging. You can check out independent reviews by other candidates on Glassdoor or look up the results of our candidate surveys to see how others feel about working and interviewing here. If this job description resonates with you, we want to hear from you!

All of our positions are fully remote. You do not have to relocate to join us!

How we work

We love working async, which means you get to set your own schedule.

We empower ownership and proactivity; when in doubt, we default to action instead of waiting.

The position

This is an exciting time to join Remote and make a personal difference in the global employment space as a Data Analyst on our Operations Data Analytics team. You will be the link between data producers and data consumers at Remote. You'll primarily focus on building out our data pipeline to unify our various data sources in a compliant manner. Beyond that, you should be committed to transforming data into readable insights and helping deliver goal-driven reports for continued innovation and growth.

Requirements

  • 2-4 years work experience in statistics, data analytics, software engineering or a related field; ideally in a fast-paced environment.
  • Strong ability to collaborate and build effective relationships with your colleagues.
  • Excellent communication skills and ability to document processes for both business and technical audiences.
  • A self-starter mentality and the ability to thrive in an unstructured and fast-paced environment.
  • Strong experience in SQL and previous experience with dbt.
  • Strong experience with Metabase or other data visualisation tools.
  • Proficiency with Git.
  • Experience working remotely is not required, but is considered a plus.

Key responsibilities

  • Data Exploration & Quality: contribute to a culture of higher standards by discovering, documenting and working towards improving the quality of our data.
  • Data Modelling: collaborate with data engineers and other data analysts and contribute to building the modelling layer of our data warehouse.
  • Data Analysis: support the Operations Data Analytics team by creating ad hoc data reports and dashboards.
  • Collaboration: collect requests from stakeholders and translate them into meaningful data. Collaborate with data engineers and other data analysts to create high-quality reports.

Our current core data stack (among other tools) contains Metabase, Retool, dbt, Redshift, Meltano, Airflow and GitLab.
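
For illustration only: the orchestration idea behind a stack like this (Airflow scheduling dbt-style models that feed Metabase dashboards) comes down to running dependent tasks in topological order. Below is a minimal sketch using only Python's standard library; the task names are invented for this example:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: raw extracts feed staging models,
# which feed a mart, which feeds a dashboard refresh.
# Each key depends on the tasks in its value set.
tasks = {
    "extract_billing": set(),
    "extract_payroll": set(),
    "stg_billing": {"extract_billing"},
    "stg_payroll": {"extract_payroll"},
    "mart_revenue": {"stg_billing", "stg_payroll"},
    "refresh_dashboard": {"mart_revenue"},
}

def run_order(graph):
    """Return one valid execution order for the task graph."""
    return list(TopologicalSorter(graph).static_order())

print(run_order(tasks))
```

A real scheduler such as Airflow layers retries, backfills, and time-based scheduling on top of exactly this kind of dependency resolution.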

Remote Compensation Philosophy

Remote's Total Rewards philosophy is to ensure fair, unbiased compensation and equitable pay along with competitive benefits in all locations in which we operate. We do not engage in or encourage cheap-labor practices; instead, we make sure to pay above local market rates. We hope to inspire other companies to support global talent-hiring and bring local wealth to developing countries.

For U.S. applicants: Across all US locations, the base salary range for this full-time position is $28,660 - $60,000 plus eligibility for equity. Our salary ranges are determined by role, level and location, and our job titles may span more than one career level. The actual base pay for the successful candidate in this role is dependent upon many factors such as location, transferable or job-related skills, work experience, relevant training, business needs, and market demands. The base salary range is subject to change and may be modified in the future.

We offer a generous benefits package to all full-time employees. In the U.S. this includes: a 401(k) plan with a 4% employer match, unlimited paid time off, paid sick leave in excess of local requirements, paid parental leave, FSA, HSA, and health, dental and vision plans. See remote.com/r/benefits for more information on our global employee benefits.

Practicals

  • You'll report to: Manager, Data Analytics
  • Team: Operations Data Analytics, Data, Engineering
  • Location: Anywhere in the World
  • Start date: As soon as possible

Application process

  1. (async) Profile review
  2. Interview with recruiter
  3. Interview with future manager
  4. (async) Small challenge
  5. (async) Challenge Review
  6. Interview with team members (no managers present)
  7. Prior employment verification check(s)
  8. (async) Offer

#LI-DNP

Benefits

Our full benefits & perks are explained in our handbook at remote.com/r/benefits. As a global company, each country works differently, but some benefits/perks are for all Remoters:
  • work from anywhere
  • unlimited personal time off (minimum 4 weeks)
  • quarterly company-wide day off for self care
  • flexible working hours (we are async)
  • 16 weeks paid parental leave
  • mental health support services
  • stock options
  • learning budget
  • home office budget & IT equipment
  • budget for local in-person social events or co-working spaces

How you’ll plan your day (and life)

We work async at Remote which means you can plan your schedule around your life (and not around meetings). Read more at remote.com/async.

You will be empowered to take ownership and be proactive. When in doubt you will default to action instead of waiting. Your life-work balance is important and you will be encouraged to put yourself and your family first, and fit work around your needs.

If that sounds like something you want, apply now!

How to apply

  1. Please fill out the form below and upload your CV in PDF format.
  2. We kindly ask you to submit your application and CV in English, as this is the standardised language we use here at Remote.
  3. If you don’t have an up-to-date CV but you are still interested in talking to us, please feel free to add a copy of your LinkedIn profile instead.

We will ask you to voluntarily tell us your pronouns at interview stage, and you will have the option to answer our anonymous demographic questionnaire when you apply below. As an equal employment opportunity employer, it’s important to us that our workforce reflects people of all backgrounds, identities, and experiences, and this data will help us stay accountable. We thank you for providing this data, if you choose to.

See more jobs at Remote

Apply for this job

+30d

Sr. Platform Engineer

InMarket · Remote
agile, scala, airflow, design, mobile, qa, java, c++, docker, kubernetes, python

InMarket is hiring a Remote Sr. Platform Engineer

Job Title: Senior Platform Engineer

Location: Remote, US Only

 

About InMarket

Since 2010, InMarket has been the leader in 360-degree consumer intelligence and real-time activation for thousands of today’s top brands. Through InMarket's data-driven marketing platform, brands can build targeted audiences, activate media in real time, and measure success in driving return on ad spend. InMarket's proprietary Moments offering outperforms traditional mobile advertising by 6x.* Our LCI attribution platform, which won the MarTech Breakthrough Award for Best Advertising Measurement Platform, was validated by Forrester to drive an average of $40 ROAS for our clients. 

*Source: Wordstream US Google Display Benchmarks for Mobile Media

Job Description

As a Senior Platform Engineer at InMarket you will be responsible for developing and maintaining highly reliable, scalable and observable enterprise technology that plays a key part in providing valuable data and insights for Fortune 500 companies. In this role you must be able to own projects from start to finish: requirements gathering, project planning, collaboration with other team leads, task prioritization, testing, and delivery ahead of deadline. You will collaborate with, mentor, and influence members of your team and across other teams at InMarket. This position requires the ability to analyze and investigate problems, uncover root causes, and propose potential solutions with their pros and cons.

Responsibilities:

  • Design and implement data driven initiatives using distributed compute engines such as Spark (Scala) and BigQuery (Python/SQL) 
  • Write reusable code that’s thoroughly QA’d and includes unit tests
  • Ability to analyze issues, identify root cause and derive solutions
  • Review code of peers and provide in-depth feedback
  • Collaborate with other team members to brainstorm and derive sound solutions that improve InMarket’s systems while limiting complexity
  • Communicate complex concepts and the results of the analyses in a clear and effective manner to Product Owners and Account Managers

Qualifications Required:

  • Strong data engineering experience in Scala, Java or Python
  • Experience with data pipelines and workflow management systems such as Airflow
  • Experience with distributed compute systems such as Spark, BigQuery, Hadoop, Hive
  • Experience with building and optimizing large scale and high-performance systems
  • Knowledge regarding microservices architecture using Kubernetes and Docker
  • Strong collaboration and communication skills within and across teams
  • B.S. or M.S. in Computer Science, Mathematics, or a related field
  • At least 4 years of engineering experience

 

Benefits Summary

  • Competitive salary, stock options, flexible vacation
  • Medical, dental and Flexible Spending Account (FSA)
  • Company Matched 401(k)
  • Unlimited PTO (Within reason)
  • Talented co-workers and management
  • Agile Development Program (For continued learning/professional development)
  • Paid Paternity & Maternity Leave

For candidates in California, Colorado, and New York City, the Targeted Base Salary Range for this role is $116,782 to $188,916. 

Actual salaries will vary depending on factors including but not limited to work experience, specialized skills and training, performance in role, business needs, and job requirements. Base salary is subject to change and may be modified in the future. Base salary is just one component of InMarket’s total rewards package that also may include bonus, equity, and benefits.  Ask your recruiter for more information!

At InMarket we are committed to a culture that supports diversity, inclusion, belonging and equal opportunity. We celebrate all people and believe everyone deserves respect regardless of race, gender, sexual orientation, backgrounds, experiences, abilities or beliefs.

InMarket is an Equal Opportunity Employer (EOE). Qualified applicants are considered for employment without regard to age, race, color, religion, sex, national origin, sexual orientation, disability, or veteran status.

Privacy Notice for California Job Applicants: https://inmarket.com/ca-notice-for-job-applicants/

#LI-Remote

 

 

See more jobs at InMarket

Apply for this job

+30d

Sr. Data Engineer

agile, terraform, airflow, postgres, sql, design, api, c++, docker, kubernetes, jenkins, python, AWS, javascript

hims & hers is hiring a Remote Sr. Data Engineer

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers users.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources to Data Lake
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
  • Partner with the rest of the Data Platform team to set best practices and ensure the execution of them
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources
  • Partner with machine learning engineers to deploy predictive models
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies
  • Partner with DevOps to build IaC and CI/CD pipelines
  • Support code versioning and code deployments for data pipelines
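
One of the patterns the bullets above describe, ingesting data from REST API endpoints into a data lake, is often implemented as an incremental, watermark-based extract. Here is a hedged sketch in plain Python; the endpoint, record shape, and function names are invented, and `fetch` stands in for a real HTTP call:

```python
from datetime import date, timedelta

def extract_increment(fetch, last_loaded: date, today: date) -> dict:
    """Pull one day-partition at a time since the last successful load.

    Writing per-day partitions keeps re-runs idempotent: re-extracting
    a day simply overwrites that partition in the lake.
    """
    day = last_loaded + timedelta(days=1)
    partitions = {}
    while day <= today:
        partitions[day.isoformat()] = fetch(day)
        day += timedelta(days=1)
    return partitions

def fetch_stub(day: date):
    # Stands in for e.g. GET /orders?day=...; a real pipeline
    # would call the external API here.
    return [{"day": day.isoformat(), "orders": 2}]

out = extract_increment(fetch_stub, date(2024, 1, 1), date(2024, 1, 3))
print(sorted(out))
```

Orchestrators like Airflow typically pass the watermark (the last loaded date) in as the run's logical date, so each scheduled run extracts exactly one partition.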

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
  • Demonstrated experience writing clean, efficient & well-documented Python code, and a willingness to become effective in other languages as needed
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform
  • Experience with Databricks platform
  • Experience with IaC technologies like Terraform
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
  • Experience building event streaming pipelines using Kafka/Confluent Kafka
  • Experience with modern data stack like Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, Tableau/Looker
  • Experience with containers and container orchestration tools such as Docker or Kubernetes
  • Experience with Machine Learning & MLOps
  • Experience with CI/CD (Jenkins, GitHub Actions, Circle CI)
  • Thorough understanding of SDLC and Agile frameworks
  • Project management skills and a demonstrated ability to work autonomously

Nice to Have:

  • Experience building data models using dbt
  • Experience with Javascript and event tracking tools like GTM
  • Experience designing and developing systems with desired SLAs and data quality metrics
  • Experience with microservice architecture
  • Experience architecting an enterprise-grade data platform

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors. We don’t ever want the pay range to act as a deterrent from you applying!

An estimate of the current salary range for US-based employees is $140,000 to $170,000 USD.

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

Hims & hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

+30d

Software Engineer - Trust & Safety

Cloudflare · London or Remote UK
3 years of experience, terraform, airflow, sql, design, postgresql, mysql, kubernetes

Cloudflare is hiring a Remote Software Engineer - Trust & Safety

About Us

At Cloudflare, we have our eyes set on an ambitious goal: to help build a better Internet. Today the company runs one of the world’s largest networks that powers approximately 25 million Internet properties, for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. 

We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! 

About the team

Cloudflare’s mission is to help build a better internet and the Trust & Safety Engineering (TSENG) team lives at the core of that effort. The team empowers customers with products and services to combat abuse, enables deeper coordination with our external industry anti-abuse partners, and ensures that teams in the company can respond to the quickly changing legal landscape of the internet with scalable tools and services. This includes services that have a direct impact on efforts to stop the spread of CSAM (child sexual abuse material) across the internet. Read more about our CSAM Scanning tool here.

About the Role

Engineers on the Trust & Safety Engineering team are responsible for the entire software development lifecycle for our products and services which include both internal and customer-facing software. Whether closing gaps in our abuse processing pipeline, extending our Trust & Safety platform or road mapping the future of Trust & Safety solutions, software engineers on the Trust & Safety Engineering team are critical to Cloudflare’s ability to help make the internet a better place. 

This role will be based out of our London office.

What you'll do

While the majority of our services are now written in Golang, you will also work with technologies such as Rust, Kafka, Redis, Kubernetes, Terraform, Airflow, Temporal and PostgreSQL. We are looking for great engineers regardless of experience with any of these specific technologies.

Responsibilities include

  • Designing, building, running and scaling tools and services that support Trust and Safety efforts
  • Analyzing and communicating complex technical requirements and concepts, able to identify the highest priority areas and carve a path to deliver
  • Collaborating with T&S, legal and product teams to understand goals and develop robust and scalable solutions.
  • Improving system design and architecture to ensure stability and performance of the internal and customer-facing compliance and anti-abuse services
  • Ongoing monitoring and maintenance of production services, including participation in on call rotations
  • Working closely with Cloudflare's Trust and Safety team to help make the internet a safer place
  • Mentoring and guiding developers in the Trust and Safety Engineering team to help build collective knowledge and technical expertise

Desirable skills and experience

  • Minimum 3 years of experience building large-scale software applications, preferably distributed systems
  • Experience designing and integrating RESTful APIs and/or gRPC services
  • Knowledge of SQL and common relational database systems such as PostgreSQL and MySQL
  • Prior experience working with Go or Rust
  • Excellent debugging and optimization skills
  • Expertise in writing well tested code
  • Interest in opportunities to be a technical mentor for teammates

Bonus

(Relevant but not required - we love to learn on the job!)

  • Deep understanding of DNS, TLS/SSL and HTTP
  • Expertise in web security issues and industry standards for access control
  • Experience with Kafka
  • Experience building web applications using React
  • Experience with Kubernetes
  • Experience with Redis

 

What Makes Cloudflare Special?

We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, using technology already used by Cloudflare’s enterprise customers, at no cost.

Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.

Path Forward Partnership: Since 2016, we have partnered with Path Forward, a nonprofit organization, to create 16-week positions for mid-career professionals who want to get back to the workplace after taking time off to care for a child, parent, or loved one.

1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. It is available publicly for everyone to use and is the first consumer-focused service Cloudflare has ever released. Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.

Sound like something you’d like to be a part of? We’d love to hear from you!

This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.

See more jobs at Cloudflare

Apply for this job

+30d

Data Engineer

Maker&Son Ltd · Balcombe, United Kingdom, Remote
tableau, airflow, sql, mongodb, elasticsearch, python, AWS

Maker&Son Ltd is hiring a Remote Data Engineer

Job Description

We are looking for a highly motivated individual to join our team as a Data Engineer.

We are based in Balcombe [40 mins from London by train, 20 minutes from Brighton] and we will need you to be based in our offices at least 3 days a week.

You will report directly to the Head of Data.

Candidate Overview

As a part of the Technology Team your core responsibility will be to help maintain and scale our infrastructure for analytics as our data volume and needs continue to grow at a rapid pace. This is a high impact role, where you will be driving initiatives affecting teams and decisions across the company and setting standards for all our data stakeholders. You’ll be a great fit if you thrive when given ownership, as you would be the key decision maker in the realm of architecture and implementation.

Responsibilities

  • Understand our data sources, ETL logic, and data schemas and help craft tools for managing the full data lifecycle
  • Play a key role in building the next generation of our data ingestion pipeline and data warehouse
  • Run ad hoc analysis of our data to answer questions and help prototype solutions
  • Support and optimise existing ETL pipelines
  • Support technical and business stakeholders by providing key reports and supporting the BI team to become fully self-service
  • Own problems through to completion both individually and as part of a data team
  • Support digital product teams by performing query analysis and optimisation

 

Qualifications

Key Skills and Requirements

  • 3+ years experience as a data engineer
  • Ability to own data problems and help to shape the solution for business challenges
  • Good communication and collaboration skills; comfortable discussing projects with anyone from end users up to the executive company leadership
  • Fluency with a programming language - we use NodeJS and Python, but are looking to adopt Golang
  • Ability to write and optimise complex SQL statements
  • Familiarity with ETL pipeline tools such as Airflow or AWS Glue
  • Familiarity with data visualisation and reporting tools, like Tableau, Google Data Studio, Looker
  • Experience working in a cloud-based software development environment, preferably with AWS or GCP
  • Familiarity with NoSQL databases such as ElasticSearch, DynamoDB, or MongoDB
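
As a loose illustration of the extract-transform-load loop the skills above point at (SQL plus pipeline tools like Airflow), here is a toy ETL step using Python's built-in sqlite3. The table and column names are invented for this sketch; a real pipeline would target the production warehouse rather than an in-memory database:

```python
import sqlite3

def load_and_aggregate(rows):
    """Load raw order rows, then aggregate revenue per day in SQL."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (day TEXT, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return con.execute(
        "SELECT day, SUM(amount) FROM orders GROUP BY day ORDER BY day"
    ).fetchall()

result = load_and_aggregate(
    [("2024-01-01", 10.0), ("2024-01-01", 5.0), ("2024-01-02", 7.5)]
)
print(result)  # [('2024-01-01', 15.0), ('2024-01-02', 7.5)]
```

In an orchestrated pipeline, a task like this would be one Airflow operator among several, with the load and the aggregation usually split into separate, retryable steps.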

See more jobs at Maker&Son Ltd

Apply for this job

+30d

Data Engineer / Developer

HolosMedia · São Paulo, Brazil, Remote
nosql, airflow, sql, linux, python

HolosMedia is hiring a Remote Data Engineer / Developer

Job Description

Your main responsibilities will sit within the tech/development area: specifically, implementing, managing, and maintaining different data pipelines, and developing process-automation solutions. You will be responsible for keeping the company's current pipelines running, pursuing continuous improvements and bringing potential innovations through frameworks and stacks that can raise the quality of our processes. You should be comfortable moving between different business areas, especially those related to Marketing and Communications, and have a sharp developer side with strong technical skills.

  • Scoping, architecture, analysis, development, and deployment of projects based on Business Intelligence and Data Science technologies;
  • Extraction, transformation, and storage of data across the various cloud-hosted data layers, populated by batch or streaming data pipelines;
  • Identification of infrastructure requirements for Business Intelligence and Data Science solutions;
  • Definition of components, integration, and implementation of Business Intelligence and Data Science solutions;
  • Development of services using APIs;
  • Designing, developing, testing, monitoring, managing, and validating data warehouse activities, including data extraction, transformation, movement, loading, cleansing, and refresh processes.

Qualifications

  • Completed university degree;
  • Solid experience with Python and SQL;
  • Solid experience creating and maintaining data transformation pipelines (ETL/ELT);
  • Experience in data analysis and modelling;
  • Experience developing REST APIs with FastAPI, Django, or Flask;
  • Solid experience with Apache Airflow;
  • Solid experience with Apache Spark;
  • Solid experience with SQL and NoSQL databases (at least one framework of each);
  • Solid experience with Linux servers;
  • Solid knowledge of Linux shell;
  • Experience with columnar data warehouses (implementation and maintenance);
  • Experience with a dashboard tool (preferably Data Studio or Power BI);
  • Familiarity with software development best practices;
  • Experience with GitHub and version-control workflows;
  • Experience with software development or similar;
  • Advanced technical English;
  • Self-motivated and self-managed.
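
To make the "batch or streaming data pipelines" part of this role concrete, here is a minimal streaming-style stage in plain Python: it consumes events one at a time and emits a running aggregate per key. The event shape and names are invented for this sketch; a real deployment would sit behind a broker or a stream processor such as Spark Structured Streaming:

```python
def running_totals(events):
    """Consume events one at a time and yield (key, running_total),
    as a micro-batch or streaming pipeline stage might."""
    totals = {}
    for event in events:
        key = event["campaign"]
        totals[key] = totals.get(key, 0) + event["clicks"]
        yield key, totals[key]

stream = [
    {"campaign": "a", "clicks": 3},
    {"campaign": "b", "clicks": 1},
    {"campaign": "a", "clicks": 2},
]
print(list(running_totals(stream)))  # [('a', 3), ('b', 1), ('a', 5)]
```

The same logic run over a bounded file instead of an unbounded stream is the batch case, which is why teams often share transformation code between the two modes.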

See more jobs at HolosMedia

Apply for this job

+30d

Senior AI Systems Engineer

InMarket · Remote (US-Only)
agile, Master’s Degree, scala, airflow, design, mobile, c++, docker, kubernetes, python, AWS

InMarket is hiring a Remote Senior AI Systems Engineer

Job Title: Senior AI Systems Engineer

Location: Remote - US Only

About InMarket

Since 2010, InMarket has been the leader in 360-degree consumer intelligence and real-time activation for thousands of today’s top brands. Through InMarket's data-driven marketing platform, brands can build targeted audiences, activate media in real time, and measure success in driving return on ad spend. InMarket's proprietary Moments offering outperforms traditional mobile advertising by 6x.* Our LCI attribution platform, which won the MarTech Breakthrough Award for Best Advertising Measurement Platform, was validated by Forrester to drive an average of $40 ROAS for our clients. 

*Source: Wordstream US Google Display Benchmarks for Mobile Media

About the Role

At InMarket, we are embarking on an ambitious journey to become an AI-first company, and we are looking for a Senior AI Systems Engineer to help lead this transformation. In this role, you will be the architect of a new era, designing and building the foundational systems that will enable our AI and ML products to flourish. Your work will involve deep technical challenges, requiring you to innovate and integrate cutting-edge AI technologies with our existing legacy systems. Your expertise will ensure that these complex systems operate seamlessly and efficiently, providing a robust platform for our AI-driven initiatives.

 

As a Senior AI Systems Engineer, you will not only be responsible for the technical execution but also for setting the strategic direction of our systems architecture. You will work closely with our data science teams, understanding the intricacies of their AI models and translating these into system requirements that support rapid iteration and deployment. Your role is critical in creating a safe and reliable environment for our AI applications, following the best practices of DevOps and MLOps. Your vision and leadership will guide the company through this pivotal transformation, ensuring that InMarket remains at the forefront of innovation in the AI space.

Your Daily Impact as a Senior AI Systems Engineer

  • Design and implement a resilient systems architecture that supports the deployment and scaling of AI and ML models.
  • Lead the integration of new AI systems with existing legacy infrastructure, ensuring compatibility and operational excellence.
  • Develop and maintain infrastructure as code (IaC) to automate the provisioning and management of AI environments.
  • Construct and oversee CI/CD pipelines tailored for AI models, optimizing for efficiency and reliability.
  • Monitor system performance, implementing advanced logging, alerting, and anomaly detection mechanisms.
  • Stay at the forefront of AI technology and MLOps best practices, advocating for and implementing innovative solutions.
  • Collaborate with cross-functional teams to ensure AI systems align with business objectives and deliver exceptional user experiences.
  • Document architectural decisions, system configurations, and operational procedures to uphold system integrity and maintainability.

Your Experience & Expertise

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
  • 5+ years of experience in systems engineering with a strong emphasis on AI/ML infrastructure.
  • Deep understanding of DevOps and MLOps principles, methodologies, and tools.
  • Proficiency in programming languages such as Python, Scala, and scripting languages for automation.
  • Expertise in cloud platforms (AWS, GCP), container orchestration (Docker, Kubernetes), and infrastructure provisioning tools.
  • Experience with workflow management tools (Apache Airflow) and ML lifecycle management (MLflow, Kubeflow).
  • Familiarity with deploying and managing machine learning models.
  • Exceptional problem-solving capabilities, communication skills, and a collaborative team-player mindset.

Compensation & Benefits Summary

  • Competitive salary, stock options, flexible vacation
  • Medical, dental and Flexible Spending Account (FSA)
  • Company Matched 401(k)
  • Unlimited PTO (Within reason)
  • Talented co-workers and management
  • Agile Development Program (For continued learning/professional development)
  • Paid Paternity & Maternity Leave

For candidates in California, Colorado, and New York City, the Targeted Base Salary Range for this role is $160,000 to $220,000. 

Actual salaries will vary depending on factors including but not limited to work experience, specialized skills and training, performance in role, business needs, and job requirements. Base salary is subject to change and may be modified in the future. Base salary is just one component of InMarket’s total rewards package that also may include bonus, equity, and benefits.  Ask your recruiter for more information!

At InMarket we are committed to a culture that supports diversity, inclusion, belonging and equal opportunity. We celebrate all people and believe everyone deserves respect regardless of race, gender, sexual orientation, backgrounds, experiences, abilities or beliefs.

InMarket is an Equal Opportunity Employer (EOE). Qualified applicants are considered for employment without regard to age, race, color, religion, sex, national origin, sexual orientation, disability, or veteran status.

Privacy Notice for California Job Applicants: https://inmarket.com/ca-notice-for-job-applicants/

#LI-Remote

 

See more jobs at InMarket

Apply for this job

+30d

(Senior) Data/Cloud Engineer (m/f/d) // Remote possible

Ebreuninger GmbH, Stuttgart, Germany, Remote
terraform, airflow, sql, azure, docker, kubernetes, python, AWS

Ebreuninger GmbH is hiring a Remote (Senior) Data/Cloud Engineer (m/f/d) // Remote possible

Job Description

  • As a Cloud/Data Engineer (m/f/d) at Breuninger, you will continuously evolve our new data platform
  • You will develop and maintain ELT/ETL processes with Python, Airflow, Docker, dbt, and more to bring new data sources on board
  • You will help other teams develop data processes on their own, including by building tools and self-service solutions for administering all of Breuninger's resources on the Google Cloud Platform (GCP)
  • You will take on DevOps tasks, automate recurring processes, and build CI/CD pipelines with GitLab and Terraform

Qualifications

  • You have at least one year of Python programming experience
  • You have a basic understanding of databases and data modeling, as well as basic SQL skills
  • You are familiar with operating ETL/ELT pipelines, or are motivated to learn
  • Clean code and test-driven development are second nature to you
  • You have good English skills
  • It would be nice, but is not required, for you to have experience with some of the following:
    • GCP, AWS, Azure, or other cloud providers
    • Big-data technologies, e.g. Apache Spark, Apache Beam, Google BigQuery
    • Apache Airflow
    • dbt
    • Terraform, Docker, Kubernetes

Apply for this job

+30d

Senior Data Engineer

Remote, Remote-Southeast Asia
airflow, sql, jenkins, python

Remote is hiring a Remote Senior Data Engineer

About Remote

Remote is solving global remote organizations’ biggest challenge: employing anyone anywhere compliantly. We make it possible for businesses big and small to employ a global team by handling global payroll, benefits, taxes, and compliance. Check out remote.com/how-it-works to learn more or if you’re interested in adding to the mission, scroll down to apply now.

Please take a look at remote.com/handbook to learn more about our culture and what it is like to work here. Not only do we encourage folks from all ethnic groups, genders, sexuality, age and abilities to apply, but we prioritize a sense of belonging. You can check out independent reviews by other candidates on Glassdoor or look up the results of our candidate surveys to see how others feel about working and interviewing here.

All of our positions are fully remote. You do not have to relocate to join us!

The position

This is an exciting time to join Remote and make a personal difference in the global employment space as a Senior Data Engineer II, joining our Data team, which is composed of Data Analysts and Data Engineers. We support decision-making and operational reporting needs by translating data into actionable insights for non-data professionals at Remote. We mainly use SQL, Python, Meltano, Airflow, Redshift, Metabase and Retool.

This role follows the Senior Engineer II role on the Remote Career paths.

What this job can offer you

  • Playing a key role in Data Platform Development & Maintenance:
    • Managing and maintaining the organization's data platform, ensuring its stability, scalability, and performance.
    • Collaboration with cross-functional teams to understand their data requirements and optimize data storage and access, while protecting data integrity and privacy.
    • Development and testing architectures that enable data extraction and transformation to serve business needs.
  • Improving further our Data Pipeline & Monitoring Systems:
    • Designing, developing, and deploying efficient Extract, Load, Transform (ELT) processes to acquire and integrate data from various sources into the data platform.
    • Identifying, evaluating, and implementing tools and technologies to improve ELT pipeline performance and reliability.
    • Ensuring data quality and consistency by implementing data validation and cleansing techniques.
    • Implementing monitoring solutions to track the health and performance of data pipelines and identify and resolve issues proactively.
    • Conducting regular performance tuning and optimization of data pipelines to meet SLAs and scalability requirements.
  • Digging deep into DBT Modelling:
    • Designing, developing, and maintaining DBT (Data Build Tool) models for data transformation and analysis.
    • Collaboration with Data Analysts to understand their reporting and analysis needs and translate them into DBT models, making sure they respect internal conventions and best practices.
  • Driving our Culture of Documentation:
    • Creating and maintaining technical documentation, including data dictionaries, process flows, and architectural diagrams.
    • Collaborating with cross-functional teams, including Data Analysts, SREs (Site Reliability Engineers) and Software Engineers, to understand their data requirements and deliver effective data solutions.
    • Sharing knowledge and offering mentorship, providing guidance and advice to peers and colleagues, creating an environment that empowers collective growth.
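
Since several of the bullets above turn on data validation and cleansing, here is a minimal, self-contained Python sketch of the idea; the field names and rules are hypothetical and not taken from Remote's actual pipelines:

```python
# Minimal sketch of row-level validation and cleansing in an ELT pipeline.
# Field names ("id", "email", "country") are hypothetical examples.
def clean_rows(rows):
    """Drop rows missing required fields and normalise the rest."""
    cleaned = []
    for row in rows:
        # Reject incomplete records instead of loading bad data downstream.
        if not row.get("id") or row.get("email") is None:
            continue
        cleaned.append({
            "id": int(row["id"]),
            "email": row["email"].strip().lower(),
            "country": (row.get("country") or "unknown").upper(),
        })
    return cleaned

raw = [
    {"id": "1", "email": " Ada@Example.com ", "country": "pt"},
    {"id": None, "email": "x@example.com"},   # dropped: missing id
    {"id": "2", "email": "bob@example.com"},  # country defaults to UNKNOWN
]
print(clean_rows(raw))
```

In a production pipeline rules like these would more likely live in dbt tests or a validation framework, but the principle of rejecting or normalising records before loading is the same.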

What you bring

  • 5+ years of experience in data engineering; high-growth tech company experience is a plus
  • Strong experience with building data extraction/transformation pipelines (e.g. Meltano, Airbyte) and orchestration platforms (e.g. Airflow)
  • Strong experience in working with SQL, data warehouses (e.g. Redshift) and data transformation workflows (e.g. dbt)
  • Solid experience using CI/CD (e.g. Gitlab, Github, Jenkins)
  • Experience with data visualization tools (e.g. Metabase) is considered a plus
  • A self-starter mentality and the ability to thrive in an unstructured and fast-paced environment
  • You have strong collaboration skills and enjoy mentoring
  • You are a kind, empathetic, and patient person
  • Writes and speaks fluent English
  • It's not required to have experience working remotely, but considered a plus

Practicals

  • You'll report to: Engineering Manager - Data
  • Team: Data 
  • Location: For this position we welcome everyone to apply, but we will prioritise applications from the following locations as we encourage our teams to diversify: Vietnam, Indonesia, Taiwan and South Korea
  • Start date: As soon as possible

Remote Compensation Philosophy

Remote's Total Rewards philosophy is to ensure fair, unbiased compensation and fair equity pay, along with competitive benefits, in all locations in which we operate. We do not engage in or encourage cheap-labor practices, and we therefore ensure we pay above in-location rates. We hope to inspire other companies to support global talent-hiring and bring local wealth to developing countries.

At first glance our salary bands may seem quite wide, so here is some context. At Remote we have international operations and a globally distributed workforce. We use geo ranges to account for geographic pay differentials as part of our global compensation strategy, remaining competitive in various markets while hiring globally.

The base salary range for this full-time position is $39,500 - $133,400. Our salary ranges are determined by role, level and location, and our job titles may span more than one career level. The actual base pay for the successful candidate in this role is dependent upon many factors such as location, transferable or job-related skills, work experience, relevant training, business needs, and market demands. The base salary range may be subject to change.

 

Application process

  1. Interview with recruiter
  2. Interview with future manager
  3. Async exercise stage 
  4. Interview with team members 
  5. Prior employment verification check (Read more at remote.com/employment-checks)

 

 

Benefits

Our full benefits & perks are explained in our handbook at remote.com/r/benefits. As a global company, each country works differently, but some benefits/perks are for all Remoters:
  • work from anywhere
  • unlimited personal time off (minimum 4 weeks)
  • quarterly company-wide day off for self care
  • flexible working hours (we are async)
  • 16 weeks paid parental leave
  • mental health support services
  • stock options
  • learning budget
  • home office budget & IT equipment
  • budget for local in-person social events or co-working spaces

How you’ll plan your day (and life)

We work async at Remote which means you can plan your schedule around your life (and not around meetings). Read more at remote.com/async.

You will be empowered to take ownership and be proactive. When in doubt you will default to action instead of waiting. Your life-work balance is important and you will be encouraged to put yourself and your family first, and fit work around your needs.

If that sounds like something you want, apply now!

How to apply

  1. Please fill out the form below and upload your CV in PDF format.
  2. We kindly ask you to submit your application and CV in English, as this is the standardised language we use here at Remote.
  3. If you don’t have an up-to-date CV but you are still interested in talking to us, please feel free to add a copy of your LinkedIn profile instead.

We will ask you to voluntarily tell us your pronouns at the interview stage, and you will have the option to answer our anonymous demographic questionnaire when you apply below. As an equal employment opportunity employer, it’s important to us that our workforce reflects people of all backgrounds, identities, and experiences, and this data will help us to stay accountable. We thank you for providing this data, if you choose to.

See more jobs at Remote

Apply for this job

+30d

Data Engineer

AmpleInsightInc, Toronto, Canada, Remote
airflow, sql, python

AmpleInsightInc is hiring a Remote Data Engineer

Job Description

We are looking for a data engineer who is passionate about analytics and helping companies build and scale data. You enjoy working with data and are motivated to produce high quality data tools and pipelines that help empower other data scientists. You are experienced in architecting data ETL workflows and schemas. Critical thinking and problem-solving skills are essential for this role.
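
As a rough illustration of the relational-schema and SQL work the paragraph above describes, the following sketch uses the standard library's sqlite3; the tables and data are invented for the example, not part of this role:

```python
# Illustrative only: a tiny relational schema and an aggregate query of the
# sort a data-engineering role involves; table names and data are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE events (
        id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(id),
        kind TEXT NOT NULL
    );
""")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "bob")])
conn.executemany(
    "INSERT INTO events (user_id, kind) VALUES (?, ?)",
    [(1, "login"), (1, "click"), (2, "login")],
)

# Events per user: the kind of aggregate a user-engagement pipeline produces.
rows = conn.execute("""
    SELECT u.name, COUNT(e.id) AS n_events
    FROM users u JOIN events e ON e.user_id = u.id
    GROUP BY u.name
    ORDER BY u.name
""").fetchall()
print(rows)  # [('ada', 2), ('bob', 1)]
```

In practice the warehouse would be Snowflake or similar rather than SQLite, but the schema-design and aggregation concerns are the same in miniature.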

Qualifications

  • BS (or higher, e.g., MS, or PhD) in Computer Science, Engineering, Math, or Statistics
  • Hands on experience working with user engagement, social, marketing, and/or finance data
  • Proficient in Python (e.g. Pandas, NumPy, scikit-learn), R, and TensorFlow, among other data science related tools and libraries
  • Extensive experience working on relational databases, designing complex data schemas, and writing SQL queries
  • Deep knowledge on performance tuning of ETL Jobs, SQL, and databases
  • Working knowledge of Snowflake
  • Experience working with Airflow is a strong plus
  • DevOps experience is a plus

See more jobs at AmpleInsightInc

Apply for this job

+30d

Python Engineer - Analytics

AmpleInsightInc, Toronto, Canada, Remote
airflow, sql, Design, python

AmpleInsightInc is hiring a Remote Python Engineer - Analytics

Job Description

We are looking for an ambitious Python Engineer/Developer. You are passionate about technology but very pragmatic in the application of it to real-world engineering problems. You are experienced in launching new products and scaling them. Critical thinking and problem-solving skills are essential for this role.

As a Python Engineer, you will contribute in a multitude of ways, from architecting phenomenal systems, creating and encouraging good software development practices, driving strategic technical improvements, and mentoring other engineers.

At Ample Insight, you will have a unique opportunity to work with best-in-class engineers on large engineering problems, but in an environment with small teams and abundant opportunities for personal impact and growth. 

Please note that although this role is remote, you are required to be located in Canada/US.

Qualifications

Responsibilities

  • You will be part of a small but highly impactful team, with a large amount of ownership and autonomy for managing things directly
  • You will architect important systems and anticipate strategic and scaling-related challenges via thoughtful long-term planning
  • You will need to design, prototype, and create solutions that support highly reliable, scalable, performant AI and analytics products


Requirements

  • BS (or MS, or PhD) in Computer Science or related engineering field involving coding
  • 3+ years of professional software development experience
  • 3+ years of experience working with Python and data/ML related Python libraries such as Pandas, NumPy and scikit-learn
  • Hands on experience working with data and analytics relating to user engagement, social, marketing, and/or finance data
  • Extensive experience working on relational databases, designing complex data schemas, and writing SQL queries
  • Deep knowledge on performance tuning of ETL jobs, SQL, and databases
  • Solid CS fundamentals with thorough understanding of and demonstrated experience in Object-Oriented Design
  • Strong understanding of design patterns and capable of incorporating them in software design
  • Experience setting technical strategy for a large or important company initiative
  • Strong knowledge of shipping impactful and complex software projects
  • Experience working with Airflow is a strong plus
  • DevOps experience is a plus

See more jobs at AmpleInsightInc

Apply for this job

+30d

Director, Data Engineering

ecobee, Remote in Canada
agile, terraform, airflow, Design, docker, AWS

ecobee is hiring a Remote Director, Data Engineering

Hi, we are ecobee. 

ecobee introduced the world’s first smart Wi-Fi thermostat to help millions of consumers save money, conserve energy, and bring home automation into their lives. That was just the beginning. We continue our pursuit to create technology that brings peace of mind into the home and allows people to focus on the moments that matter most. We take pride in making a meaningful difference to the environment, all while being part of the exciting, connected home revolution. 

In 2021, ecobee became a subsidiary of Generac Power Systems. Generac introduced the first affordable backup generator and later created the category of automatic home standby generators. The company is committed to sustainable, cleaner energy products poised to revolutionize the 21st century electrical grid. Together, we take pride in making a meaningful difference to the environment.

Why we love to do what we do: 

We’re helping build the world of tomorrow with solutions that improve everyday life while making a positive impact on the planet. Our products and services work in harmony to provide comfort, efficiency, and peace of mind for millions of homes and businesses. While we’re proud of what we’ve done so far, there’s still a lot we can do—and you can be part of it.  

Join our extraordinary team. 

We're a rapidly growing global tech company headquartered in Canada, in the heart of downtown Toronto, with a satellite office in Leeds, UK (and remote ecopeeps in the US). We get to work with some of North America's and the UK's leading professionals. Our colleagues are proud to bring their authentic selves to work, confident that what we do is grounded in a greater purpose. We’re always looking for curious, talented, and passionate people to join our team.

This role is open to being 100% remote within Canada, although our home office is located in Toronto, Ontario. You’ll be required to travel to Toronto at a minimum once per quarter for team and/or company events.

Who You’ll be Joining: 

As the Director of Data Engineering, you’ll be joining our VP of Data Science and the greater Data Science Management team here at ecobee, as you lead your team of Data Engineers in building our next generation data platform.

You and your team will lead ecobee in the migration to a data-product culture and in doing so be responsible for defining and building a data-mesh architecture that governs and supports both new and existing data products across all our business domains.

How You’ll Make an Impact:

Your extensive knowledge in data engineering will go towards building an organizational data structure and system architecture that empowers our teams with the ability to intuitively build a cohesive data ecosystem, govern their own data and self-serve their ongoing needs with insights in real-time.

You’ll lead from the front, leveraging you and your team’s strong engineering capabilities to build the core data platform and champion new standards in data development, enabling your team and others to build data products that are discoverable, interoperable, addressable, and secure.

As a Director of Data Engineering at ecobee you will;

  • Foster a positive, supportive, and inclusive work environment.
  • Hire and develop a team of data engineers — providing them coaching, mentoring, motivation, and technical guidance.
  • Build high-quality, efficient, and scalable data infrastructure.
  • Bring big-picture thinking to our organizational data strategy, providing guidance to data, engineering, and product teams.
  • Partner with domain teams to migrate their existing data products to new data infrastructure and/or governance platform.
  • Lead data architecture design that reduces complexity and enables extendibility and reusability.
  • Continuously improve engineering practices — balancing speed, quality, and business impact.
  • Build effective agile practices that deliver robust solutions on time and on budget.
  • Lead the execution of project plans, delivery commitments, and risk mitigation.
  • Help evaluate the feasibility of initiatives through quick prototyping with respect to performance, quality, time, and cost.
  • Build strong partnerships with cross-functional teams to contribute to, and deliver unique customer experiences.
  • Thrive in a fast-paced, ambiguous, and high-stakes environment.

What You’ll Bring to the Table:

We've built the below list as a guideline for some of the skills and interests of our development team - but we strive to build our team with members from diverse backgrounds and skill sets, so if any combination of these applies to you we'd love to chat!

  • A strong background in Data Engineering and/or Data Architecture that extends out to modern cloud infrastructure and data-mesh/data-fabric principles.
  • Hands-on engineering experience, and the on-going willingness to engage in hands-on development.
  • The ability to champion projects, educate and inspire cross-functional teams and stakeholders (both technical and non-technical) using excellent verbal and written communication skills.
  • Experience managing engineering teams empathetically and effectively.
  • Experience with Agile and other program management methodologies.
  • Strategies that proactively identify upcoming risks, issues, and bottlenecks both within your team and across departmental boundaries.
  • A curious, analytical mentality with a bias towards taking action.
  • Experience with some of the following technologies: GCP, AWS, Big Query, Dataflow, Airflow, Matillion, SiSense, Terraform, Docker

Just so you know: The successful candidate will be required to complete a background check. 

What happens after you apply?

Application Review. It will happen. By an actual person in Talent Acquisition. We get upwards of 100 applications for some roles, so it can take a few days, but every applicant can expect a note regarding their application status.

Interview Process (4 Rounds)

  • Round 1: A 45-minute phone call with a member of Talent Acquisition.
  • Round 2: A 1-hour virtual meeting with the VP of Data Science. This interview has a values and leadership focus.
  • Round 3: A 1-hour virtual meeting with a cross-functional team. This interview has a technical focus.
  • Round 4: A 1-hour virtual meeting with senior leaders from two teams you’ll work closely with in a cross-functional capacity.

With ecobee, you’ll have the opportunity to: 

  • Be part of something big: Get to work in a fresh, dynamic, and ever-growing industry.  
  • Make a difference for the environment: Make a sustainable impact while on your daily job, and after it through programs like ecobee acts. 
  • Expand your career: Learn with our in-house learning enablement team, and enjoy our generous professional learning budget. 
  • Put people first: Benefit from competitive salaries, health benefits, and a progressive Parental Top-Up Program (75% top-up or five bonus days off). 
  • Play a part in an exceptional culture: Enjoy a fun and casual workplace with an open-concept office, located at Corus Quay. ecobee Leeds is based at our riverside office on The Calls. 
  • Celebrate diversity: Be part of a truly welcoming workplace. We offer a mentorship program and bias training.  

Are you interested? Let's make it work. 

Our people are empowered to take ownership of their schedules with workflows that allow for flexible hours. Based on your job, you have the option of an office-based, fully remote, or hybrid work environment. New team members working remotely will have all necessary equipment provided and shipped to them, and we conduct our interviews and onboarding sessions primarily through video.

We’re committed to inclusion and accommodation. 

ecobee believes that openness and diversity make us better. We welcome applicants from all backgrounds to apply regardless of race, gender, age, religion, identity, or any other aspect which makes them unique. Accommodations can be made upon request for candidates taking part in all aspects of the selection process. Our recruitment team is happy to answer any questions candidates may have about virtual interviewing, onboarding, and future work locations.

We’re up to incredible things. Come and be part of them. 

Discover our products and services and learn more about who we are.  

Ready to join ecobee? View current openings. 

Please note, ecobee does not accept unsolicited resumes.  

Apply for this job

+30d

Analytics Engineer

tableau, airflow, postgres, salesforce, c++, elasticsearch, mysql, python

BetterCloud is hiring a Remote Analytics Engineer

BetterCloud is the market leader for SaaS Operations, enabling IT professionals to transform their employee experience, maximize operational efficiency, and centralize data protection. With no-code automation enabling zero-touch workflows, thousands of forward-thinking organizations like Twitch, Oscar Health, and Cloud Factory now rely on BetterCloud to automate processes and policies across their cloud application portfolio.

With 10+ years of experience pioneering the SaaS Operations movement, BetterCloud now serves the world’s largest community of SaaSOps experts. As host of Altitude, the industry’s leading SaaSOps event, and publisher of The State of SaaSOps Report, the category’s definitive market research, BetterCloud is recognized by customers (G2) and leading analyst firms (Gartner and Forrester) as the market leader in SaaS Operations. 

This role is remote and/or out of our new and exciting Mexico City Office. Mexico City is our first office outside of the US, and you’ll be one of the first employees there, helping us build the office from scratch and shape an amazing culture. BetterCloud is backed, among others, by some of the best technology investors Vista Equity Partners, Warburg Pincus, Bain Capital, and Accel.

About You

  • Bachelor’s degree in engineering, math, finance, statistics, economics, information technology, or a related discipline 
  • Proven analytical and quantitative skills and ability to use hard data and metrics to back up assumptions and develop business cases
  • 3-5 years in the analysis space as a Data Analyst, FP&A Analyst, or Analytics/Business Intelligence/Data Engineer
  • 2+ years experience in Tableau/Looker/PowerBI Data Visualization creating dashboards 
  • 2+ years of Python experience, specifically for the automation of data retrieval from data sources such as third-party systems, APIs, and document stores
  • 2+ years of experience with any data warehouse (BigQuery, Redshift, Postgres, etc.)
  • Experience with any orchestration tool (Airflow, Dagster, Prefect, etc.)
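
One of the bullets above asks for Python automation of data retrieval from APIs; the sketch below shows only the parsing half of such a task, with an invented paginated-response shape (no real BetterCloud or third-party endpoint is implied):

```python
# Hypothetical sketch: flattening one page of a paginated JSON API response.
# The response shape ("results", "next") is invented for illustration.
import json

def parse_page(payload):
    """Return (rows, next_url) for one page of a paginated API response."""
    doc = json.loads(payload)
    rows = [(r["id"], r["status"]) for r in doc["results"]]
    return rows, doc.get("next")  # next page URL, or None on the last page

# Stubbed response body, standing in for what an HTTP client would return.
page = json.dumps({
    "results": [{"id": 1, "status": "active"}, {"id": 2, "status": "suspended"}],
    "next": None,
})
rows, next_url = parse_page(page)
print(rows, next_url)
```

A real retrieval job would loop, fetching each `next` URL until it is None and loading the accumulated rows into the warehouse.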

What You’ll Do 

  • Drive metric generation and Tableau dashboard creation to provide key stakeholders with actionable insights across customer success, engineering, and the entire BetterCloud organization
  • Facilitate the transfer of data from a variety of sources to BigQuery (for example: Kafka, ElasticSearch, MySQL, Salesforce, and the BetterCloud platform).
  • Help maintain and improve existing infrastructure that powers our existing analytics and reporting technology stack by keeping up to date with the current data analytics landscape.
  • Model data at rest in BigQuery to enable powerful data analysis via Tableau.
  • Be an advocate for best practices and continued learning. 
  • Help drive the business and product architecture of the organization.

Goals

In your first week, you will have…

  • Completed our 4-day universal onboarding program, BetterBeginnings 
  • Met with your manager 1:1
  • Met your team
  • Gained access to the tools and resources necessary to be successful in your new role 

In your first 30 days, you will have…

  • Completed your department’s functional onboarding program 
  • Met and collaborated with your team 
  • Identified projects and tasks that you’ll dive into moving forward 

What We Offer

  • Hybrid work model with up to 2 days per week working from home*
  • Generous PTO policy plus paid mental health days
  • Seguro de Gastos Médicos Mayores, Seguro de Asistencia Médica, Vision Insurance, Dental Insurance, Life Insurance and dedicated mental health resources
  • Vales de despensa
  • Financial wellness support and one-time WFH stipend
  • Plus more… Think events, killer swag, and a strong BetterCloud Community!

 

BetterCloud is an Equal Opportunity Employer, including disabled and vets.

*Remote - This role is not eligible for remote employees. Employees must be based in the Mexico City area. 

#LI-Hybrid 

 

 

See more jobs at BetterCloud

Apply for this job

+30d

Software Engineer - Infrastructure

Cloudflare, Remote US
airflow, postgres, sql, ansible, c++, docker, postgresql, mysql, kubernetes, linux, python

Cloudflare is hiring a Remote Software Engineer - Infrastructure

About Us

At Cloudflare, we have our eyes set on an ambitious goal: to help build a better Internet. Today the company runs one of the world’s largest networks that powers approximately 25 million Internet properties, for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. 

We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! 

Available Locations: Remote - US, Eastern or Central-based timezone

About the Role

An engineering role at Cloudflare provides an opportunity to address some big challenges, at scale.  We believe that with our talented team, we can solve some of the biggest security, reliability and performance problems facing the Internet. Just how big?  

  • We have in excess of 15 Terabits of network transit capacity
  • We operate 250 Points-of-presence around the world
  • We serve more traffic than Twitter, Amazon, Apple, Instagram, Bing, & Wikipedia combined
  • Anytime we push code, it immediately affects over 200 million internet users
  • Every day, up to 20,000 new customers sign up for Cloudflare service
  • Every week, the average Internet user touches us more than 500 times

We are looking for talented Software Engineers to build and develop the platform that earns the trust our customers place in Cloudflare.  Our Software Engineers come from a variety of technical backgrounds and have built up their knowledge working in different environments. But the common factors across all of our reliability-focused engineers include a passion for automation, scalability, and operational excellence.  Our Infrastructure Engineering team focuses on the automation needed to scale our infrastructure.

Our team is well-funded and focused on building an extraordinary company.  This is a superb opportunity to join a high-performing team and scale our high-growth network as Cloudflare’s business grows.  You will build tools to constantly improve our scale and speed of deployment.  You will nurture a passion for an “automate everything” approach that makes systems failure-resistant and ready-to-scale.   

Cloudflare Software Engineers focus on automating our infrastructure installations and decommissions at scale.  We enable our Data Center Engineering teams to install new data centers and replace servers and networking in existing data centers as quickly and efficiently as possible, without impacting existing infrastructure and customer services.  While our focus is on automating all infrastructure requirements, there is an element of ongoing operational support for Data Center Engineers and other teams.  We also review upcoming hardware changes and update automation and configuration management to cater to these advances.

Many of our Software Engineers have had the opportunity to work at multiple offices on interim and long-term project assignments. The ideal Software Engineering candidate has a passionate curiosity about how the Internet fundamentally works and has strong knowledge of Linux and hardware.  We require strong coding ability in Bash, Python, or Go. We prefer to hire experienced candidates; however, raw skill trumps experience, and we welcome strong junior applicants.

Requisite Skills

  • Intermediate level software development skills in Python and Go
  • Linux systems administration experience
  • 5 years of relevant Development experience
  • Strong skills in network services, including Rest APIs and HTTP

Examples of desirable skills, knowledge and experience

  • 5 years of relevant work experience
  • Strong tooling and automations development experience
  • Network fundamentals: DHCP, ARP, subnetting, routing, firewalls, IPv6
  • Configuration management systems such as Saltstack, Chef, Puppet or Ansible
  • Load balancing and reverse proxies such as Nginx, Varnish, HAProxy, Apache
  • SQL databases (Postgres or MySQL)
  • Time series databases (OpenTSDB, Graphite, Prometheus)
  • The ability to understand service and device metrics and visualize them using Grafana
  • Key/Value stores (Redis, KyotoTycoon, Cassandra, LevelDB)

Bonus Points

  • Experience programming in C, C++, Rust or Go
  • Experience with continuous / rapid release engineering
  • Experience developing systems that are highly available and redundant across regions
  • High-bandwidth transit Internetworking and routing experience
  • Performance analysis and debugging with tools like perf, sar, strace, dtrace
  • Experience with the Linux kernel and Linux software packaging
  • Internetworking and BGP

Some tools that we use

  • Apache Airflow 
  • Salt
  • Netbox
  • Docker
  • Kubernetes
  • Nginx
  • Python
  • PostgreSQL
  • Redis
  • Prometheus

Compensation

Compensation may be adjusted depending on work location.

  • For Colorado-based hires: Estimated annual salary of $137,000 - $152,000.
  • For New York City, Washington, and California (excluding Bay Area) based hires: Estimated annual salary of $154,000 - $171,000.
  • For Bay Area-based hires: Estimated annual salary of $162,000 - $180,000.

Equity

This role is eligible to participate in Cloudflare’s equity plan.

Benefits

Cloudflare offers a complete package of benefits and programs to support you and your family.  Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!  The below is a description of our benefits for employees in the United States, and benefits may vary for employees based outside the U.S.

Health & Welfare Benefits

  • Medical/Rx Insurance
  • Dental Insurance
  • Vision Insurance
  • Flexible Spending Accounts
  • Commuter Spending Accounts
  • Fertility & Family Forming Benefits
  • On-demand mental health support and Employee Assistance Program
  • Global Travel Medical Insurance

Financial Benefits

  • Short and Long Term Disability Insurance
  • Life & Accident Insurance
  • 401(k) Retirement Savings Plan
  • Employee Stock Participation Plan

Time Off

  • Flexible paid time off covering vacation and sick leave
  • Leave programs, including parental, pregnancy health, medical, and bereavement leave

What Makes Cloudflare Special?

We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, giving them, at no cost, technology already used by Cloudflare’s enterprise customers.

Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.

Path Forward Partnership: Since 2016, we have partnered with Path Forward, a nonprofit organization, to create 16-week positions for mid-career professionals who want to get back to the workplace after taking time off to care for a child, parent, or loved one.

1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here’s the deal - we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.

Sound like something you’d like to be a part of? We’d love to hear from you!

This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

Cloudflare is proud to be an equal opportunity employer.  We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness.  All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.We are an AA/Veterans/Disabled Employer.

Cloudflare provides reasonable accommodations to qualified individuals with disabilities.  Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment.  If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.

See more jobs at Cloudflare

Apply for this job

+30d

Senior Data Engineer

carsales, Sydney, Australia, Remote
terraform, airflow, sql, python, AWS

carsales is hiring a Remote Senior Data Engineer

Job Description

What you will be doing

We're looking for a Senior Data Engineer to help build the infrastructure that powers the optimisation loop at the heart of our business: analytics and machine learning. You'll be joining an awesome team of engineers and world-class ad-tech experts.

  • Building ETL pipelines (Airflow, Apache Beam, SQL, etc) to ingest data.
  • Designing and maintaining our data lake and associated infrastructure using Terraform.
  • Working with business and technical teams to interpret data.
  • Identifying data quality issues and proactively developing quality strategies.
  • Building pipelines for machine learning models.
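
The bullet points above describe a classic extract-transform-load loop. As a rough, framework-free sketch of that pattern in Python (the function names and sample records are illustrative, not from the posting; in an Airflow deployment each stage would typically be its own task):

```python
# Minimal ETL sketch: one function per pipeline stage.

def extract(raw_rows):
    """Ingest raw records (stand-in for reading from an API or bucket)."""
    return list(raw_rows)

def transform(rows):
    """Clean and reshape: drop rows missing a price, normalise make names."""
    cleaned = []
    for row in rows:
        if row.get("price") is None:
            continue  # data-quality rule: skip incomplete records
        cleaned.append({"make": row["make"].strip().lower(),
                        "price": float(row["price"])})
    return cleaned

def load(rows, sink):
    """Write transformed rows to a sink (stand-in for a data-lake table)."""
    sink.extend(rows)
    return len(rows)

raw = [{"make": " Toyota ", "price": "21000"},
       {"make": "Mazda", "price": None}]
lake = []
loaded = load(transform(extract(raw)), lake)
print(loaded, lake)  # 1 [{'make': 'toyota', 'price': 21000.0}]
```

In a real pipeline each stage would read from and write to durable storage rather than passing Python lists in memory, so a failed stage can be retried independently.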

Qualifications

What you bring to the role

  • Experience with cloud data engineering platforms. We use GCP, but Databricks, AWS, etc are all good experience.
  • Experience with the Python programming language: you are able to define pipelines in Python and maintain simple webservices where required.
  • Ideally, you have experience deploying your pipelines on an Infrastructure-as-Code basis. We use Terraform, but CloudFormation/CDK are a good background.
  • Good understanding of SQL, ideally with some data warehousing/data lake experience.

See more jobs at carsales

Apply for this job

+30d

Sr. Data Engineer - Data Analytics

R.S.Consultants, Pune, India, Remote
Bachelor's degree, scala, airflow, sql, Design, typescript, python, AWS, Node.js

R.S.Consultants is hiring a Remote Sr. Data Engineer - Data Analytics

Job Description

We are looking for a Sr. Data Engineer for an international client. This is a 100% remote job. The person will be working from India and collaborating with a global team. 

Total Experience: 7+ Years

Your role

  • Take key responsibilities within requirements analysis; scalable, low-latency streaming platform solution design; architecture; and end-to-end delivery of key modules, in order to provide real-time data solutions for our product
  • Write clean, scalable code using Go, TypeScript / Node.js, Scala, Python, or SQL, and test and deploy applications and systems
  • Solve our most challenging data problems, in real-time, utilizing optimal data architectures, frameworks, query techniques, sourcing from structured and unstructured data sources.
  • Be part of an engineering organization delivering high quality, secure, and scalable solutions to clients
  • Involvement in product and platform performance optimization and live site monitoring
  • Mentor team members through giving and receiving actionable feedback.

Our tech. stack:

  • AWS (Lambda, SQS, Kinesis, KDA, Redshift, Athena, DMS, Glue, Go/TypeScript, DynamoDB), Airflow, Flink, Spark, Looker, EMR
  • A continuous deployment process based on GitLab

A little more about you:

  • A Bachelor's degree in a technical field (e.g., computer science or mathematics). 
  • 3+ years experience with real-time, event driven architecture
  • 3+ years experience with a modern programming language such as Scala, Python, Go, Typescript
  • Experience designing complex data processing pipelines
  • Experience with data modeling (star schema, dimensional modeling, etc.)
  • Experience with query optimisation
  • Experience with Kafka is a plus
  • Shipping and maintaining code in production
  • You like sharing your ideas, and you're open-minded

Why join us?

Key moment to join in terms of growth and opportunities

Our people matter, work-life balance is important

Fast-learning environment, entrepreneurial and strong team spirit

45+ nationalities: a cosmopolitan & multi-cultural mindset

Competitive salary package & benefits (health coverage, lunch, commute, sport)

DE&I Statement: 

We believe diversity, equity and inclusion, irrespective of origins, identity, background and orientations, are core to our journey. 

Qualifications

Hands-on experience in Scala/Python with data modeling and real-time/streaming data, plus experience with complex data processing pipelines.

BE/ BTech in Computer Science

See more jobs at R.S.Consultants

Apply for this job

+30d

Databricks Data Engineer - Data & Analytics team (remote / Costa Rica- or LATAM-based)

Hitachi, San Jose, Costa Rica, Remote
scala, airflow, sql, Design, azure, git, python, AWS

Hitachi is hiring a Remote Databricks Data Engineer - Data & Analytics team (remote / Costa Rica- or LATAM-based)

Job Description

 

Please note: Although our position is primarily remote / virtual (there could be some occasional onsite work in downtown San Jose, should you live close enough), you MUST live, and be authorized to work, in Costa Rica without sponsorship. Candidates in other Latin America (LATAM) countries can be considered as an employee if willing to relocate to Costa Rica or can work via our 3rd party payroll company.

 

DATA ENGINEER (DATABRICKS, PYTHON, SPARK) 

This is a full-time, well-benefited career opportunity in our Data & Analytics organization (Azure DataWarehouse / DataLakehouse and Business Intelligence) for a highly experienced Data Engineer in Big Data systems design, with hands-on knowledge of data architecture, especially Spark and Delta/Data Lake technologies.

Individuals in this role will assist in the design, development, enhancement, and maintenance of complex data pipeline products that manage business-critical operations and large-scale analytics pipelines. Qualified applicants will have a demonstrated capability to learn new concepts quickly, have a data engineering background, and/or have robust software engineering expertise.

Responsibilities

  • Scope and execute together with team leadership. Work with the team to understand platform capabilities and how to best improve and expand those capabilities.
  • Strong independence and autonomy.
  • Design, development, enhancement, and maintenance of complex data pipeline products which manage business-critical operations and large-scale analytics applications.
  • Experience leading mid- and senior-level data engineers. 
  • Support analytics, data science and/or engineering teams and understand their unique needs and challenges. 
  • Instill excellence into the processes, methodologies, standards, and technology choices embraced by the team.
  • Embrace new concepts quickly to keep up with fast-moving data engineering technology.
  • Dedicate time to continuous learning to keep the team apprised of the latest developments in the space.
  • Commitment to developing technical maturity across the company.

Qualifications

  • 5+ years of Data Engineering experience, including 2+ years designing and building Databricks data pipelines, is REQUIRED; Azure cloud is highly preferred, however we will consider AWS, GCP, or other cloud platform experience in lieu of Azure
  • Experience with conceptual, logical and/or physical database designs is a plus
  • 2+ years of hands-on Python/Pyspark/SparkSQL and/or Scala experience is REQUIRED
  • 2+ years of experience with Big Data pipelines or DAG Tools (Data Factory, Airflow, dbt, or similar) is REQUIRED
  • 2+ years of Spark experience (especially Databricks Spark and Delta Lake) is REQUIRED
  • 2+ years of hands-on experience implementing Big Data solutions in a cloud ecosystem, including Data/Delta Lakes, is REQUIRED
  • Experience with source control (git) on the command line is REQUIRED
  • 2+ years of SQL experience, specifically to write complex, highly optimized queries across large volumes of data is HIGHLY DESIRED
  • Data modeling / data profiling capabilities with Kimball/star schema methodology is a plus
  • Professional experience with Kafka, or other live data streaming technology, is HIGHLY DESIRED
  • Professional experience with database deployment pipelines (e.g., dacpacs or similar technology) is HIGHLY DESIRED
  • Professional experience with one or more unit testing or data quality frameworks is HIGHLY DESIRED

#LI-CA1

#REMOTE

#databricks

#python

#spark

#dataengineer

#datawrangler

Apply for this job

+30d

Senior Data Engineer

seedtag, Madrid, ES, Remote
scala, airflow, sql, mongodb, kubernetes, linux, python

seedtag is hiring a Remote Senior Data Engineer

We are looking for a talented Senior Data Engineer to help us change the world of digital advertising together.

WHO WE ARE

At Seedtag our goal is to lead the change in the advertising industry, because we believe that effective advertising should not be at odds with users’ privacy.

By combining Natural Language Processing and Computer Vision our proprietary, Machine Learning-based technology provides a human-like understanding of the content of the web that finds the best context for each ad while providing unparalleled risk-mitigation capabilities that protect advertisers from showing their ads on pages that could be damaging for their brand. All of this, without relying on cookies or any other tracking mechanisms.

Every day, our teams develop new services that reach over 200 million users worldwide with fast response times to ensure that we deliver the best user experience. We’re fully committed to the DevOps culture, where we provide the platform that our Software Developers and Data Scientists use to manage over 100 different microservices, pushing dozens of changes to production every day. All of this is built on top of Kubernetes in Google Cloud Platform and Amazon Web Services.

If you are interested in joining one of the fastest growing startups in Europe and work on massive scalability challenges, this is the place for you.

KEY FIGURES

2014 · Founded by two ex-Googlers

2018 · 16M total turnover & Internationalization & Getting growth

2021 · Fundraising round of 40M€ & +10 countries & +230 Seedtaggers

2022 · Fundraising round of 250M€ & expansion into the U.S. market

ABOUT YOU

Your key responsibilities will be:

  • You will be a key player in the development of a reliable data architecture for ingestion, processing, and surfacing of data for large-scale applications
  • You will cooperate with other teams to unify data sources, as well as recommend and implement ways to improve data reliability, quality and integrity.
  • You will start by processing data from different sources using tools such as SQL, MongoDB, and Apache Beam, and will be exploring and proposing new methods and tools to acquire new data.
  • You will work with data science and data analytics teams, to help them improve their processes by building new tools and implementing best practices
  • You will ensure continuous improvement in delivery, applying engineering best practices to development, monitoring, and data quality of the data pipelines.
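
Several of the responsibilities above revolve around data reliability, quality, and integrity checks. A tiny, framework-free Python sketch of one such check (the field names and rules are illustrative assumptions, not Seedtag's actual schema):

```python
# Illustrative batch data-quality check: count records failing each rule,
# as a pipeline step might do before loading data downstream.

RULES = {
    "has_url": lambda r: bool(r.get("page_url")),
    "valid_score": lambda r: 0.0 <= r.get("context_score", -1.0) <= 1.0,
}

def quality_report(records):
    """Return {rule_name: number_of_failing_records}."""
    failures = {name: 0 for name in RULES}
    for record in records:
        for name, rule in RULES.items():
            if not rule(record):
                failures[name] += 1
    return failures

batch = [
    {"page_url": "https://example.com", "context_score": 0.93},
    {"page_url": "", "context_score": 1.7},  # fails both rules
]
report = quality_report(batch)
print(report)  # {'has_url': 1, 'valid_score': 1}
```

A report like this can feed a monitoring dashboard, or gate the load step when failure counts exceed a threshold.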

We're looking for someone who:

  • You have at least 5 years of solid experience in Data Engineering
  • You have a degree in Computer Science, Engineering, Statistics, Mathematics, Physics or another degree with a strong quantitative component.
  • You are comfortable with object-oriented languages, such as Python or Scala, and you are fluent in working with a Linux terminal and writing basic bash scripts.
  • You have ample experience with Data Engineering tools such as Apache Beam, Spark, Flink or Kafka.
  • You have experience orchestrating ETL processes using systems such as Apache Airflow, and managing databases like SQL, Hive or MongoDB.
  • You are a proactive person who likes the dynamic startup work culture

WHAT WE OFFER

  • Key moment to join Seedtag in terms of growth and opportunities
  • High-performance tier salary bands and excellent compensation
  • One Seedtag: Work for a month from any of our open offices, with travel and stay paid, if you’re a top performer (think of Brazil, Mexico...)
  • Paid travels to our HQ in Madrid to work p2p with your squad members
  • Macbook Pro M1
  • Build your home office with a budget of up to 1K€ (external screen, chair, table...)
  • Flexible schedule to balance work and personal life
  • An unlimited remote working environment, where you can choose to work from home indefinitely or attend our Madrid headquarters whenever you want, where you will find a great workplace location with food, snacks, great coffee, and much more.
  • A harassment-free, supportive and safe environment to ensure the healthiest and friendliest professional experience, fostering diversity at all levels.
  • Optional company-paid English and/or Spanish courses.
  • Access to learning opportunities (learning & development budget)
  • We love what we do, but we also love having fun. We have many team activities you can join and enjoy with your colleagues! A yearly offsite with the whole company, team offsites, and Christmas events...
  • Access to a flexible benefits plan with restaurant, transportation, and kindergarten tickets and discounts on medical insurance

Are you ready to join the Seedtag adventure? Then send us your CV!

See more jobs at seedtag

Apply for this job

+30d

Senior Data Engineer

Axios, Remote
agile, terraform, airflow, sql, Design, c++, python, AWS

Axios is hiring a Remote Senior Data Engineer

Quick take: Axios is a growth-stage company dedicated to providing trustworthy, award-winning news content in an audience-first format. We’re hiring a remote Senior Data Engineer to join our Consumer Insights data team! 

Why it matters: As a Senior Data Engineer, this person will collaborate with other data engineers, scientists, analysts, and product managers to drive forward data initiatives across mission-critical Axios products. The team is responsible for analyzing consumer behavior, preferences, and feedback to allow Axios to tailor products, services, and marketing strategies effectively.

Go deeper: As a Senior Data Engineer, you will play a leadership role in building and delivering solutions to problems in an intelligent and nuanced way. In this role, you will make an impact on Axios through the following responsibilities:

  • Architect and build data products and features that provide consumer insights about Axios’ audience
  • Hands-on development and execution against the team’s roadmap in collaboration with other data engineers, analysts, scientists, and quality engineers. 
  • Technical and architectural decision-making
  • Develop and maintain data pipelines and warehouses to support Axios in data-informed decision-making 
  • Writing clean, well-documented, and well-tested code primarily in SQL/Python
  • Provide technical insights, and feasibility assessments, communicate technical constraints to the team’s Product Manager
  • Estimate efforts of technical implementation to aid in planning and sequencing of developmental tasks
  • Mentoring less experienced members of the team through pair programming and empathetic code review
  • Share knowledge through presenting at data chapter meetings and demoing to team members and stakeholders
  • Staying up to date with industry trends and collaborating on best practices

The details: The ideal candidate should have an entrepreneurial spirit, be highly collaborative, exhibit a passion for building technology products, and have the following qualifications:

  • Experience with or knowledge of Agile Software Development methodologies
  • Experience building data applications in languages such as (but not limited to) Python, SQL, Bash, Jinja, Terraform 
  • Experience designing, building, and maintaining data pipelines to produce insights
  • Experience with functional design and dimensional data modeling
  • Experience with DBT and semantic data models
  • Experience with data pipeline development and data orchestration systems such as Airflow
  • Practical experience with columnar data warehouses, such as Redshift
  • Experience working with CI/CD pipelines and understanding best deployment practices for data products 
  • Proven ability to ship high-quality, testable, and accessible code quickly
  • Experience working in and around cloud providers such as AWS

 Bonus experiences:

  • Experience working in and around AWS data services
  • Experience working with Data Scientists, Machine Learning Engineers or supporting MLOps 
  • Experience working with MapReduce and Spark clusters
  • Experience successfully working with data product managers
  • Experience working in Media 

Don’t forget:

  • Competitive salary
  • Health insurance (100% paid for individuals, 75% for families)
  • Primary caregiver 12-week paid leave
  • 401K
  • Generous vacation policy, plus company holidays
  • A commitment to an open, inclusive, and diverse work culture
  • Annual learning and development stipend

Additional pandemic-related benefits:

  • One mental health day per quarter
  • $100 monthly work-from-home stipend
  • Company-sponsored access to Ginger coaching and mental health support 
  • OneMedical membership, including tele-health services 
  • Increased work flexibility for parents and caretakers 
  • Access to the Axios “Family Fund”, which was created to allow employees to request financial support when facing financial hardship or emergencies 
  • Class pass discount
  • Virtual company-sponsored social events

Starting salary for this role is in the range of $140,000 - $190,000 and is dependent on numerous factors, including but not limited to location, work experience, and skills. This range does not include other compensation benefits.

Equal Opportunity Employer Statement

Axios is an equal opportunity employer that is committed to diversity and inclusion in the workplace. We prohibit discrimination and harassment of any kind based on race, color, sex, religion, sexual orientation, age, gender identity, gender expression, veteran status, national origin, disability, genetic information, pregnancy, or any other protected characteristic as outlined by federal, state, or local laws.

This policy applies to all employment practices within our organization, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. Axios makes hiring decisions based solely on qualifications, merit, and business needs at the time.

See more jobs at Axios

Apply for this job

+30d

Senior Software Engineer, Back-End (Content)

Muck Rack, Remote
airflow, postgres, vue, c++, elasticsearch, mysql, python, AWS, frontend

Muck Rack is hiring a Remote Senior Software Engineer, Back-End (Content)

Muck Rack is the leading SaaS platform for public relations and communications professionals. Our mission is to enable organizations to build trust, tell their stories and demonstrate the unique value of earned media. Muck Rack’s Public Relations Management (PRM) platform enables organizations to build relationships with the media, manage crisis risk and demonstrate PR’s impact on business outcomes.

Founder controlled, fully distributed, and growing sustainably, Muck Rack has received several awards for its unparalleled culture and product from organizations like Inc., Quartz, G2, and BuiltIn. We value resilience, transparency, ownership, & customer devotion and infuse these values into everything we do.

We’re looking for a collaborative and self-motivated senior software engineer, back-end to join our quickly growing team and make a big impact.

As a senior software engineer on the Content Team, you’ll work closely with software engineers, product managers, and designers, to ensure that the content available to our customers meets their expectations.  You’ll work on major technical projects with large data volumes, lead the building of new features, and help shape our engineering culture and processes. Our engineers are not siloed to any particular part of the application–everyone contributes everywhere. Our tech stack includes Python, Django, Celery, MySQL, Elasticsearch, Vue, and Webpack. Our technology team is focused on scale, quality, delivery, and thoughtful customer experience. We ship frequently without sacrificing work/life balance.

To be set up for success in this role, you’ll need to have:

  • 5+ years total professional experience as a software engineer
  • Django or significant web experience in a similar framework
  • The ability to understand complex data processes in a distributed service architecture
  • A history of consistently delivering value to customers and internal stakeholders

If any of the below also describe you, this could be an exciting opportunity:

  • Worked on a complex, high-traffic site at a startup or software-as-a-service company, ideally with large amounts of data
  • Experience with MySQL (or Postgres) and/or ElasticSearch
  • Any combination of the following: experience with Celery, Luigi or Airflow, Kafka, AWS, NLP, data model performance tuning, content extraction, application performance tuning
  • Familiarity with modern frontend frameworks (like Vue or React) and development patterns
  • Interest in journalism, news, media or social media
  • Worked in highly collaborative and cross-functional environments. 

In addition, we’re always looking for candidates who:

  • Have excellent communication skills, with an ability to explain ideas clearly, give and receive feedback, and work well with team members
  • Exhibit a willingness to learn in areas where they have less experience with our tech stack
  • Take pride in the quality of their code. (Your code should be readable, testable, and understandable years later. You adhere to the Zen of Python.)
  • Work well in a fast-paced development environment with testing, continuous integration and multiple daily deploys
  • Have the ability to manage complexity in a large project, and incur technical debt only after considering the tradeoffs
  • Take a logical approach to problem solving that combines analytical thinking and intuition

Interview Overview:

Below you'll find an outline of the interview plan for this role. Please note that this is what we expect the process to look like; we may ask you for supplemental information or require an additional step before making a final decision.

  • 30 min interview with a member of our Talent Team
  • A 1 hour zoom interview with the hiring manager
  • Take-home assignment (2 hours max)
  • Peer interviews, including a 30 min code review discussion
  • Final call(s) with executive team member(s)  

Salary

The starting salary for this role is between $140,000 - $170,000, depending on skills and experience. We take a geo-neutral approach to compensation within the US, meaning that we pay based on job function and level, not location. For all other countries, we have competitive pay bands based on market standards.

Individual compensation decisions are based on a number of factors, including experience level, skillset, and balancing internal equity relative to peers at the company. We expect the majority of candidates who are offered roles to fall within this range based on these factors. We recognize that the person we hire may be less experienced (or more senior) than this job description as posted. If that ends up being the case, the updated salary range will be communicated to you as a candidate.

Why Muck Rack?

Remote Work, Forever. We’re a fully distributed team and have pledged to remain that way forever. We offer employees a full home office setup, phone & internet reimbursement, and a monthly coworking membership. We build culture through virtual and in-person team bonding opportunities including team lunches, friendly competitions, and celebratory events!

Transparent Compensation. We offer competitive geo-neutral pay in the U.S. and review compensation at least once annually to ensure internal equity and alignment with the external market. Depending on the role, we offer either a standardized bonus program or attainable commission structure and an opportunity to earn equity in the company. All employees are eligible for our 401(k) plan* with employer contributions.

Health & Wellness*. Muck Rack provides comprehensive health, dental, vision, disability and life insurance for employees and their families. We offer a high-deductible health plan with 100% premium coverage for individuals, as well as a range of other plan options. Our team also has access to 24/7 Virtual Care, an Employee Assistance Program, employer-funded HSA contributions, and other pre-tax benefits. Team members have access to a quarterly wellness stipend and a free Headspace subscription.

PTO and Family Benefits. Our team enjoys 4+ weeks of off-the-grid PTO, plus paid sick/mental health days, summer Fridays, and 13 paid holidays. In order to combat Zoom fatigue and allow for deep work without interruption, we have implemented “No Internal Meeting Fridays” year round. We also provide up to 16 weeks of fully paid parental leave.

Personal & Professional Development. We grow talent by creating internal pathways for advancement and promotion. Muck Rack conducts bi-annual performance reviews, hosts team-wide workshops, and offers management training and leadership training opportunities. We also provide unlimited subscriptions to L&D platforms including Coursera & O’Reilly, as well as 2 additional days of PTO to dedicate to learning and development.

Culture of Inclusion. We know that diverse perspectives breed innovation and help us better serve our customers. We are committed to ensuring employees feel their identities are valued and that people of all backgrounds and points of view are treated equitably.

Customer-First. Founder-controlled means we have the freedom to be nimble, highly collaborative and innovative, building forward-thinking products that enable 3,000+ companies around the world to build trust, tell their stories and demonstrate the unique value of earned media.

*These benefits are specific to US-based employees. In some, but not all, cases we are able to offer equivalent benefits to employees located outside of the United States.

While we are a fully distributed team, we do have limitations on where we can hire and maintain a list of acceptable working locations based on job function. If we are unable to hire in your current location for the role for which you applied, you will be notified via email. While we enjoy many benefits as a permanently distributed and remote company, we cannot always support relocation or extended travel and have guidelines in place to ensure compliant work away from your designated permanent residence.

If you're excited about an opportunity at Muck Rack but your experience doesn't align perfectly with the requirements of the role outlined here, please don't let it stop you from applying. We're committed to building a diverse and inclusive workplace, and we want to hear from you. You may be a great fit for this role or another position on our team. We deliberately encourage individuals from all backgrounds, including race, gender identity, sexual orientation, and disability status to apply for positions. We are an equal opportunity employer and we're committed to a fair and consistent interview process and candidate experience.
 
#LI-Remote