Data Engineer Remote Jobs

104 Results

+30d

Senior Data Engineer

NielsenIQ · Illinois, Remote
DevOps · Agile · Design · Azure · Jenkins · Python · AWS

NielsenIQ is hiring a Remote Senior Data Engineer

Job Description

Position Description 

  • Meet with stakeholders to understand the big picture and their asks. 
  • Recommend architecture aligned with the goals and objectives of the product/organization. 
  • Recommend standard ETL design patterns and best practices. 
  • Drive the detail design and architectural discussions as well as customer requirements sessions to support the implementation of code and procedures for our big data product.
  • Design and develop proof of concept/prototype to demonstrate architecture feasibility. 
  • Collaborate with developers on the team to meet product deliverables. 
  • Must have familiarity with the data science tech stack, including at least one of SAS, SPSS, or R. 
  • Work independently and collaboratively on a multi-disciplined project team in an Agile development environment. 
  • Identify code and design optimization opportunities and implement them. 
  • Learn and integrate with a variety of systems, APIs, and platforms. 
  • Interact with a multi-disciplined team to clarify, analyze, and assess requirements. 
  • Be actively involved in the design, development, and testing activities in big data applications. 

Qualifications

  • Hands-on experience with Python, PySpark, and Jupyter Notebooks. 
  • Familiarity with Databricks. Azure Databricks is a plus. 
  • Familiarity with data cleansing, transformation, and validation. 
  • Proven architecture skills on Big Data projects. 
  • Hands-on experience with a code versioning tool such as GitHub, Bitbucket, etc. 
  • Hands-on experience building CI/CD pipelines in GitHub Actions, Azure DevOps, Jenkins, etc. 
  • Hands-on experience with Spark. 
  • Strong written and verbal communication skills. 
  • Self-motivated, with the ability to work well in a team. 
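
The data cleansing, transformation, and validation work called out above can be sketched in plain Python; PySpark expresses the same steps as DataFrame filters and column expressions. The record fields and validation rules here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Sale:
    store_id: str
    amount: float

def cleanse(rows):
    """Coerce types and drop records that fail validation,
    analogous to a PySpark withColumn/filter chain."""
    clean, rejected = [], []
    for row in rows:
        try:
            sale = Sale(store_id=row["store_id"].strip().upper(),
                        amount=float(row["amount"]))
        except (KeyError, ValueError, AttributeError):
            rejected.append(row)  # missing field or bad type
            continue
        if sale.amount < 0 or not sale.store_id:
            rejected.append(row)  # fails a business rule
            continue
        clean.append(sale)
    return clean, rejected

raw = [
    {"store_id": " il-01 ", "amount": "19.99"},
    {"store_id": "IL-02", "amount": "oops"},   # bad amount -> rejected
    {"store_id": "", "amount": "5.00"},        # empty key -> rejected
]
clean, rejected = cleanse(raw)
print(len(clean), len(rejected))  # -> 1 2
```

Keeping rejected records rather than silently dropping them is a common pattern, since the reject stream is what feeds data-quality reporting.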

Any mix of the following skills is also valuable: 

  • Experience with data visualization tools such as Power BI or Tableau. 
  • Experience with DevOps CI/CD tools and automation processes (e.g., Azure DevOps, GitHub, Bitbucket). 
  • Experience with Azure Cloud Services and Azure Data Factory. 
  • Azure or AWS Cloud certification preferred. 

Education:

  • Bachelor of Science degree from an accredited university 

See more jobs at NielsenIQ

Apply for this job

+30d

Senior Data Engineer

Mozilla · Remote
SQL · Design · C++ · Python

Mozilla is hiring a Remote Senior Data Engineer

To learn the Hiring Ranges for this position, please select your location from the Apply Now dropdown menu.

To learn more about our Hiring Range System, please click this link.

Why Mozilla?

Mozilla Corporation is the non-profit-backed technology company that has shaped the internet for the better over the last 25 years. We make pioneering brands like Firefox, the privacy-minded web browser, and Pocket, a service for keeping up with the best content online. Now, with more than 225 million people around the world using our products each month, we’re shaping the next 25 years of technology and helping to reclaim an internet built for people, not companies. Our work focuses on diverse areas including AI, social media, security and more. And we’re doing this while never losing our focus on our core mission – to make the internet better for people. 

The Mozilla Corporation is wholly owned by the non-profit 501(c) Mozilla Foundation. This means we aren’t beholden to any shareholders — only to our mission. Along with thousands of volunteer contributors and collaborators all over the world, Mozillians design, build and distribute open-source software that enables people to enjoy the internet on their terms. 

About this team and role:

As a Senior Data Engineer at Mozilla, your primary area of focus will be our Analytics Engineering team. This team models our data so that the rest of Mozilla has access to it, in the appropriate format, when they need it, to help them make data-informed decisions. The team is also tasked with helping to maintain and improve our data platform; recent improvements include introducing a data catalog and building in data quality checks. Check out the Data@Mozilla blog for more details on some of our work.

What you’ll do: 

  • Work with data scientists to design data models, answer questions and guide product decisions
  • Work with other data engineers to design and maintain scalable data models and ETL pipelines
  • Help improve the infrastructure for ingesting, storing and transforming data at a scale of tens of terabytes per day
  • Help design and build systems to monitor and analyze data from Mozilla’s products
  • Establish best practices for governing data containing sensitive information, ensuring compliance and security
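
The data-modeling work described above is, at heart, about giving the rest of the company a stable schema to query. A toy illustration using SQLite from the Python standard library; the table and column names are hypothetical, not Mozilla's actual schema:

```python
import sqlite3

# A tiny star schema: one dimension table, one fact table keyed to it.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_country (country_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_daily_active_users (
    day TEXT,
    country_id INTEGER REFERENCES dim_country(country_id),
    dau INTEGER
);
""")
cur.executemany("INSERT INTO dim_country VALUES (?, ?)", [(1, "DE"), (2, "FR")])
cur.executemany("INSERT INTO fact_daily_active_users VALUES (?, ?, ?)",
                [("2024-05-01", 1, 120), ("2024-05-01", 2, 80), ("2024-05-02", 1, 130)])

# Analysts query the model without needing to know the raw ingestion format.
cur.execute("""
SELECT c.name, SUM(f.dau)
FROM fact_daily_active_users f
JOIN dim_country c USING (country_id)
GROUP BY c.name ORDER BY c.name
""")
rows = cur.fetchall()
print(rows)  # -> [('DE', 250), ('FR', 80)]
```

The point of the model is the separation: ingestion pipelines can change how the fact table is populated without breaking the queries analysts have built on top of it.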

What you’ll bring: 

  • A minimum of 3 years of professional experience in data engineering
  • Proficiency with the programming languages used by our teams (SQL and Python)
  • Demonstrated experience designing data models used to represent specific business activities to power analysis
  • Strong software engineering fundamentals: modularity, abstraction, data structures, and algorithms
  • Ability to work collaboratively with a distributed team, leveraging strong communication skills to ensure alignment and effective teamwork across different time zones
  • Our team requires skills in a variety of domains. You should have proficiency in one or more of the areas listed below, and be interested in learning about the others:
    • You have used data to answer specific questions and guide company decisions.
    • You are opinionated about data models and how they should be implemented; you partner with others to map out a business process, profile available data, design and build flexible data models for analysis.
    • You have experience recommending / implementing new data collection to help improve the quality of data models.
    • You have experience with data infrastructure: databases, message queues, batch and stream processing
    • You have experience building modular and reusable ETL/ELT pipelines in distributed databases
    • You have experience with highly scalable distributed systems hosted on cloud providers (e.g. Google Cloud Platform)
  • Commitment to our values:
    • Welcoming differences
    • Being relationship-minded
    • Practicing responsible participation
    • Having grit

What you’ll get:

  • Generous performance-based bonus plans to all regular employees - we share in our success as one team
  • Rich medical, dental, and vision coverage
  • Generous retirement contributions with 100% immediate vesting (regardless of whether you contribute)
  • Quarterly all-company wellness days where everyone takes a pause together
  • Country specific holidays plus a day off for your birthday
  • One-time home office stipend
  • Annual professional development budget
  • Quarterly well-being stipend
  • Considerable paid parental leave
  • Employee referral bonus program
  • Other benefits (life/AD&D, disability, EAP, etc. - varies by country)

About Mozilla 

Mozilla exists to build the Internet as a public resource accessible to all because we believe that open and free is better than closed and controlled. When you work at Mozilla, you give yourself a chance to make a difference in the lives of Web users everywhere. And you give us a chance to make a difference in your life every single day. Join us to work on the Web as the platform and help create more opportunity and innovation for everyone online.

Commitment to diversity, equity, inclusion, and belonging

Mozilla understands that valuing diverse creative practices and forms of knowledge is crucial to, and enriches, the company’s core mission. We encourage applications from everyone, including members of all equity-seeking communities, such as (but certainly not limited to) women, racialized and Indigenous persons, persons with disabilities, and persons of all sexual orientations, gender identities, and expressions.

We will ensure that qualified individuals with disabilities are provided reasonable accommodations to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment, as appropriate. Please contact us at hiringaccommodation@mozilla.com to request accommodation.

We are an equal opportunity employer. We do not discriminate on the basis of race (including hairstyle and texture), religion (including religious grooming and dress practices), gender, gender identity, gender expression, color, national origin, pregnancy, ancestry, domestic partner status, disability, sexual orientation, age, genetic predisposition, medical condition, marital status, citizenship status, military or veteran status, or any other basis covered by applicable laws.  Mozilla will not tolerate discrimination or harassment based on any of these characteristics or any other unlawful behavior, conduct, or purpose.

Group: D

#LI-DNI

Req ID: R2679

See more jobs at Mozilla

Apply for this job

+30d

Data Engineer

Zone IT · Sydney, New South Wales, Australia, Remote Hybrid

Zone IT is hiring a Remote Data Engineer

We are currently seeking a highly motivated and experienced Data Engineer for a full-time position. You will be responsible for designing and implementing data architectures, integrating data from various sources, and optimizing data pipelines to ensure efficient and accurate data processing.

Key responsibilities:

  • Design and implement data architectures, including databases and processing systems
  • Integrate data from various sources and ensure data quality and reliability
  • Optimize data pipelines for scalability and performance
  • Develop and maintain ETL processes and data transformation solutions
  • Apply data security measures and ensure compliance with data privacy regulations
  • Create and maintain documentation related to data systems design and maintenance
  • Collaborate with cross-functional teams to understand data requirements and provide effective data solutions

Key skills and qualifications:

  • Bachelor's degree or higher in Computer Science, Data Science, or a related field
  • Strong proficiency in SQL, Python, and/or Java
  • Experience with ETL processes and data integration
  • Working knowledge of data modeling and database design principles
  • Familiarity with big data technologies such as Hadoop, Spark, or Kafka is a plus
  • Experience with cloud platforms such as AWS, Azure, or GCP is a plus
  • Strong problem-solving and analytical skills
  • Excellent communication and collaboration abilities

About Us

Zone IT Solutions is an Australia-based recruitment company. We specialize in Digital, ERP and larger IT services. We offer flexible, efficient and collaborative solutions to any organization that requires IT experts. Our agile, agnostic and flexible solutions will help you source the IT expertise you need. Our delivery offices are in Melbourne, Sydney and India. If you are looking for new opportunities, please share your profile at Careers@zoneitsolutions.com or contact us at 0434189909.

Also follow our LinkedIn page for new job opportunities and more.

Zone IT Solutions is an equal opportunity employer and our recruitment process focuses on essential skills and abilities. We welcome applicants from a diverse range of backgrounds, including Aboriginal and Torres Strait Islander peoples, people from culturally and linguistically diverse (CALD) backgrounds and people with disabilities.

See more jobs at Zone IT

Apply for this job

+30d

Sr. Data Engineer - Remote

Trace3 · Remote
DevOps · Agile · NoSQL · SQL · Design · Azure · GraphQL · API · Java · C++ · C# · Python · Backend

Trace3 is hiring a Remote Sr. Data Engineer - Remote


Who is Trace3?

Trace3 is a leading Transformative IT Authority, providing unique technology solutions and consulting services to our clients. Equipped with elite engineering and dynamic innovation, we empower IT executives and their organizations to achieve competitive advantage through a process of Integrate, Automate, Innovate.

Our culture at Trace3 embodies the spirit of a startup with the advantage of a scalable business. Employees can grow their career and have fun while doing it!

Trace3 is headquartered in Irvine, California. We employ more than 1,200 people all over the United States. Our major field office locations include Denver, Indianapolis, Grand Rapids, Lexington, Los Angeles, Louisville, Texas, San Francisco.  

Ready to discover the possibilities that live in technology?

 

Come Join Us!

Street-Smart Thriving in Dynamic Times

We are flexible and resilient in a fast-changing environment. We continuously innovate and drive constructive change while keeping a focus on the “big picture.” We exercise sound business judgment in making high-quality decisions in a timely and cost-effective manner. We are highly creative and can dig deep within ourselves to find positive solutions to different problems.

Juice - The “Stuff” it takes to be a Needle Mover

We get things done and drive results. We lead without a title, empowering others through a can-do attitude. We look forward to the goal, mentally mapping out every checkpoint on the pathway to success, and visualizing what the final destination looks and feels like.

Teamwork - Humble, Hungry and Smart

We are humble individuals who understand how our job impacts the company's mission. We treat others with respect, admit mistakes, give credit where it’s due and demonstrate transparency. We “bring the weather” by exhibiting positive leadership and solution-focused thinking. We hug people in their trials, struggles, and failures – not just their success. We appreciate the individuality of the people around us.


 

Who We’re Looking For:

We’re looking to add a Senior Data Integration Engineer with a strong background in data engineering and development.  You will work with a team of software and data engineers to build client-facing data-first solutions utilizing data technologies such as SQL Server and MongoDB. You will develop data pipelines to transform/wrangle/integrate the data into different data zones.

To be successful in this role, you will need to hold extensive knowledge of SQL, relational databases, ETL pipelines, and big data fundamentals.  You will also need to possess strong experience in the development and consumption of RESTful APIs.  The ideal candidate will also be a strong independent worker and learner.

 

What You’ll Be Doing

  • Develop processes and data models for consuming large quantities of 3rd party vendor data via RESTful APIs.
  • Develop data processing pipelines to analyze, transform, and migrate data between applications and systems.
  • Analyze data from multiple sources and negotiate differences in storage schema using the ETL process.
  • Develop APIs for external consumption by partners and customers.
  • Develop and support our ETL environment by recommending improvements, monitoring, and deploying quality and validation processes to ensure accuracy and integrity of data.
  • Design, develop, test, deploy, maintain, and improve data integration pipelines.
  • Create technical solutions that solve business problems and are well engineered, operable, and maintainable.
  • Design and implement tools to detect data anomalies (observability). Ensure that data is accurate, complete, and high quality across all platforms.
  • Troubleshoot data issues and perform root cause analysis to proactively resolve product and operational issues.
  • Assemble large and complex data sets; develop data models based on specifications using structured data sets.
  • Develop familiarity with emerging and complex automations and technologies that support business processes.
  • Develop scalable and re-usable frameworks for ingestion and transformation of large datasets.
  • Work within an Agile delivery / DevOps methodology to deliver product increments in iterative sprints.
  • Work with stakeholders including the Executive, Product, Data and Design teams to support their data infrastructure needs while assisting with data-related technical issues.
  • Develop data models and mappings and build new data assets required by users. Perform exploratory data analysis on existing products and datasets.
  • Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
  • Engage in logical and physical design of databases, table creation, script creation, views, procedures, packages, and other database objects.
  • Create documentation for solutions and processes implemented or updated to ensure team members and stakeholders can correctly interpret it.
  • Design and implement processes and/or process improvements to help the development of technology solutions.
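
One bullet above calls for tools that detect data anomalies (observability). A minimal sketch of the idea in plain Python, with made-up field names and thresholds; a production version would run per batch or per table partition and feed alerting:

```python
def anomaly_report(rows, required=("id", "amount"), max_null_rate=0.05):
    """Flag missing required fields and out-of-range values in a batch of records."""
    issues = []
    for field in required:
        nulls = sum(1 for r in rows if r.get(field) is None)
        rate = nulls / len(rows) if rows else 0.0
        if rate > max_null_rate:
            issues.append(f"{field}: null rate {rate:.0%} exceeds {max_null_rate:.0%}")
    # Domain rule: monetary amounts should never be negative.
    negatives = [r for r in rows if isinstance(r.get("amount"), (int, float)) and r["amount"] < 0]
    if negatives:
        issues.append(f"amount: {len(negatives)} negative value(s)")
    return issues

batch = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": -3.0}, {"id": None, "amount": 4.0}]
issues = anomaly_report(batch)
print(issues)  # -> ['id: null rate 33% exceeds 5%', 'amount: 1 negative value(s)']
```

An empty report means the batch passed; a non-empty one is the signal to quarantine the batch and start root-cause analysis rather than let bad data propagate downstream.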

 

Your Skills and Experience (In Order of Importance):

  • 5+ years of relational database development experience; including SQL query generation and tuning, database design, and data concepts.
  • 5+ years of backend and Restful API development experience in NodeJS (experience with GraphQL a plus).
  • 5+ years of development experience with the following languages: Python, Java, C#/.NET.
  • 5+ years of experience with SQL and NoSQL databases; including MS SQL and MongoDB.
  • 5+ years consuming RESTful APIs with data ingestion and storage.
  • 5+ years developing RESTful APIs for use by customers and 3rd parties.
  • 3+ years of professional work experience designing and implementing data pipelines in a cloud environment.
  • 3+ years of experience working within Azure cloud.
  • Experience in integrating and ingesting data from external data sources.
  • Strong diagnostic skills and ability to research, troubleshoot, and logically determine solutions.
  • Ability to effectively prioritize tasks in a fast-paced, high-volume, and evolving work environment.
  • Comfortable managing multiple and changing priorities, and meeting deadlines.
  • Highly organized, detail-oriented, excellent time management skills.
  • Excellent written and verbal communication skills.
Actual salary will be based on a variety of factors, including location, experience, skill set, performance, licensure and certification, and business needs. The range for this position in other geographic locations may differ. Certain positions may also be eligible for variable incentive compensation, such as bonuses or commissions, that is not included in the base salary.
Estimated Pay Range
$142,500–$168,700 USD

The Perks:

  • Comprehensive medical, dental and vision plans for you and your dependents
  • 401(k) Retirement Plan with Employer Match, 529 College Savings Plan, Health Savings Account, Life Insurance, and Long-Term Disability
  • Competitive Compensation
  • Training and development programs
  • Stocked kitchen with snacks and beverages
  • Collaborative and cool culture
  • Work-life balance and generous paid time off

 

***To all recruitment agencies: Trace3 does not accept unsolicited agency resumes/CVs. Please do not forward resumes/CVs to our careers email addresses, Trace3 employees or any other company location. Trace3 is not responsible for any fees related to unsolicited resumes/CVs.

See more jobs at Trace3

Apply for this job

+30d

Process Engineer

Tessenderlo Group · Phoenix, AZ, Remote
Design · API

Tessenderlo Group is hiring a Remote Process Engineer

Job Description

Are you an experienced Chemical Engineer passionate about process optimization and hands-on work? Do you thrive in environments where you're given the autonomy to lead, innovate, and solve complex problems? If so, we have an exciting opportunity for you!

As a Process Engineer III with Tessenderlo Kerley, Inc., you will be pivotal in troubleshooting, designing, and implementing different processes at multiple sites. You will collaborate closely with plant operations, HS&E, and project teams to achieve company production, quality control, and compliance goals. In addition, you will work with the Process Engineering Manager and other engineers to learn company tools and standard practices. Tessenderlo Kerley has multiple facilities in the U.S. and abroad, offering countless opportunities for professional growth and development.

The ideal candidate for this role will have a sharp eye for detail, strong organizational skills and the ability to balance multiple projects. You’ll also need a solid technical background in chemical plant operations, an interest in analyzing process data, and the drive to find practical solutions for engineering challenges.

Key Responsibilities:

  • Chemical Engineering – Understanding piping and instrumentation diagrams, mass and energy balances, chemical compatibility, and product quality controls.
  • Process Safety Management – Participation in or leadership of PHA/HAZOPs, assisting with change management.
  • Design – P&ID redlines, equipment/instrument specifications, and calculations (line sizing, PSV sizing per API codes, rotating equipment sizing).
  • Project Execution – Scope of work development, gathering and review of vendor bids, and collaboration with other engineering disciplines.
  • Field Work – Provide technical support for troubleshooting, turnarounds and project commissioning efforts at 2-4 sites, with approximately 30-40% travel.

    Qualifications

    What We’re Looking For:

    • A Bachelor of Science degree in Chemical Engineering.
    • At least five years of hands-on process engineering experience, ideally with some exposure to Sulfur Recovery Units.
    • Strong, independent decision-making skills to drive projects with minimal oversight.
    • Technical skills such as P&ID design, equipment/instrument sizing and selection, review of procedures and operating manuals.
    • A knack for balancing multiple projects and sites while maintaining safety and productivity standards.
    • A motivated, safety-conscious individual who inspires others through professionalism and effective communication.

    What we can offer you:

    • Independence: You will have the freedom to make impactful decisions and optimize processes with minimal supervision.
    • Continuous Learning: You will participate in seminars and gain exposure to various subjects, processes and cutting-edge technology.
    • Diverse Experiences: With both office and fieldwork, you'll collaborate with cross-functional teams, travel to multiple sites (domestic and minimal international), and tackle unique challenges.
    • Flexibility: Tessenderlo Kerley values professional growth and allows engineers to explore their interests related to company projects and assignments.
    • Safety First: You will join a company with an outstanding safety record and where your well-being is a top priority.

    Physical Requirements:

    • Ability to lift 50 pounds, climb stairs and use a variety of safety equipment, including respirators and SCBAs.

    If you’re a problem solver, project executor, and passionate about pushing the boundaries of process engineering, this is the role for you!

    Join our team and take your career to the next level by applying your skills to real-world challenges in a dynamic and rewarding environment.

    See more jobs at Tessenderlo Group

    Apply for this job

    +30d

    Senior Data Engineer

Bloomreach · Remote CEE, Czechia, Slovakia
Redis · remote-first · C++ · Kubernetes · Python

    Bloomreach is hiring a Remote Senior Data Engineer

    Bloomreach is the world’s #1 Commerce Experience Cloud, empowering brands to deliver customer journeys so personalized, they feel like magic. It offers a suite of products that drive true personalization and digital commerce growth, including:

    • Discovery, offering AI-driven search and merchandising
    • Content, offering a headless CMS
    • Engagement, offering a leading CDP and marketing automation solutions

    Together, these solutions combine the power of unified customer and product data with the speed and scale of AI optimization, enabling revenue-driving digital commerce experiences that convert on any channel and every journey. Bloomreach serves over 850 global brands including Albertsons, Bosch, Puma, FC Bayern München, and Marks & Spencer. Bloomreach recently raised $175 million in a Series F funding round, bringing its total valuation to $2.2 billion. The investment was led by Goldman Sachs Asset Management with participation from Bain Capital Ventures and Sixth Street Growth. For more information, visit Bloomreach.com.

     

    We want you to join us as a full-time Senior Data Engineer on our Data Pipeline team. We work remote-first, but we are more than happy to meet you in our nice office in Bratislava or Brno. And if you are interested in who will be your engineering manager, check out Vaclav's LinkedIn.

    Intrigued? Read on…

    Your responsibilities

    • You will develop and maintain a Data LakeHouse on top of the GCP platform using Apache Iceberg, BigQuery, BigLake tables, Dataplex, and Dataproc running Apache Spark/Flink, with open file formats like Avro and Parquet
    • You will help maintain the streaming mechanism that moves data from Apache Kafka into the Data LakeHouse
    • You will optimise the Data LakeHouse for near-real-time and non-real-time analytical use cases, primarily for customer activation and scenario/campaign evaluation
    • You should help with areas like data discovery and managed access to data through the data governance layer and data catalog using Dataplex, so our engineering teams can leverage this unified Data LakeHouse
    • You will be responsible for data modeling and schema evolution
    • You should help us adopt concepts from Data Fabric and Data Mesh to run data as a product and unlock the potential of that data for our clients
    • You should bring expertise to the team from similar previous projects to influence how we adopt and evolve the concepts mentioned above, as well as topics like zero-copy integration and reverse ETL, to ease integration with clients’ platforms
    • You will also help maintain the existing data exports to Google’s BigQuery using Google’s Dataflow and Apache Beam
    • You will help us run and support our services in production handling high-volume traffic using Google Cloud Platform and Kubernetes.
    • You will review the code of your peers and they'll review yours. We have high code quality standards and the four-eyes principle is a must!
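
The Kafka-to-LakeHouse streaming described above ultimately comes down to windowed aggregation over an event stream. This toy plain-Python version shows the core idea; the real pipelines would use Spark, Flink, or Beam, and the event shape and window size here are assumptions:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed-size windows,
    like a tumbling window in Flink or Beam."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

# Timestamps in seconds; keys are event types.
events = [(5, "view"), (30, "click"), (61, "view"), (119, "view"), (120, "click")]
counts = tumbling_window_counts(events)
print(counts)
# -> {(0, 'view'): 1, (0, 'click'): 1, (60, 'view'): 2, (120, 'click'): 1}
```

The hard parts in production are exactly what this toy skips: late and out-of-order events, exactly-once delivery from Kafka, and committing window results atomically to open-format tables like Iceberg.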

    Your qualifications

    • You have production experience with building and operating a DataLake, Data Warehouse or Data LakeHouses
    • You have a taste for big data streaming, storage and processing using open source technologies
    • You can demonstrate your understanding of what it means to treat data as a product
    • You know what Data Meshes and Data Fabrics are, and what is critical to ensure they bring value
    • You are able to learn and adapt. It'll be handy while exploring new tech, navigating our not-so-small code base, or when iterating on our team processes.
    • You know data structures; you know Python and (optionally) Go.

    Our tech stack

    • Google Cloud Platform, DataFlow, Apache Beam, BigQuery, BigLake Table
    • Open formats IceBerg, Avro, Parquet
    • DataProc, Spark, Flink, Presto
    • Python, GO
    • Apache Kafka, Kubernetes, GitLab
    • BigTable, Mongo, Redis
    • … and much more

    Compensation

    • Salary range starting from 4300 EUR gross per month, going up depending on your experience and skills
    • There's a bonus based on company performance and your salary.
    • You will be entitled to restricted stock options that will truly make you a part of Bloomreach.
    • You can spend 1500 USD per year on the education of your choice (books, conferences, courses, ...).
    • You can count on free access to Udemy courses.
    • We have 4 company-wide disconnect days throughout the year during which you will be encouraged not to work and spend a day with your friends and family "disconnected".
    • You will have an extra 5 days of paid vacation. Extra days off for extra work-life balance.
    • Food allowance!
    • Sweet referral bonus up to 3000 USD based on the position.

    Your success story.

    • During the first 30 days, you will get to know the team, the company, and the most important processes. You’ll work on your first tasks. We will help you to get familiar with our codebase and our product.
    • During the first 90 days, you will participate in your first, more complex projects. You will help the team to find solutions to various problems, break the solution down into smaller tasks and participate in implementation. You will learn how we identify problems, how we prioritize our efforts, and how we deliver value to our customers.
    • During the first 180 days, you’ll become an integral part of the team. You will achieve the first goals we will set together to help you grow and explore new and interesting things. You will help us to deliver multi-milestone projects bringing great value to our customers. You will help us mitigate your first incidents and eventually even join the on-call rotation. You will get a sense of where the team is heading and you’ll help us to shape our future.
    • Finally, you’ll find out that our values are truly lived by us. We are dreamers and builders. Join us!

     

    More things you'll like about Bloomreach:

    Culture:

    • A great deal of freedom and trust. At Bloomreach we don’t clock in and out, and we have neither corporate rules nor long approval processes. This freedom goes hand in hand with responsibility. We are interested in results from day one. 

    • We have defined our 5 values and the 10 underlying key behaviors that we strongly believe in. We can only succeed if everyone lives these behaviors day to day. We've embedded them in our processes like recruitment, onboarding, feedback, personal development, performance review and internal communication. 

    • We believe in flexible working hours to accommodate your working style.

    • We work remote-first with several Bloomreach Hubs available across three continents.

    • We organize company events to experience the global spirit of the company and get excited about what's ahead.

    • We encourage and support our employees to engage in volunteering activities - every Bloomreacher can take 5 paid days off to volunteer*.
    • The Bloomreach Glassdoor page elaborates on our stellar 4.6/5 rating. The Bloomreach Comparably page Culture score is even higher at 4.9/5.

    Personal Development:

    • We have a People Development Program -- participating in personal development workshops on various topics run by experts from inside the company. We are continuously developing & updating competency maps for select functions.

    • Our resident communication coach Ivo Večeřa is available to help navigate work-related communications & decision-making challenges.*
    • Our managers are strongly encouraged to participate in the Leader Development Program to develop in the areas we consider essential for any leader. The program includes regular comprehensive feedback, consultations with a coach and follow-up check-ins.

    • Bloomreachers utilize the $1,500 professional education budget on an annual basis to purchase education products (books, courses, certifications, etc.)*

    Well-being:

    • The Employee Assistance Program -- with counselors -- is available for non-work-related challenges.*

    • Subscription to Calm - sleep and meditation app.*

    • We organize ‘DisConnect’ days where Bloomreachers globally enjoy one additional day off each quarter, allowing us to unwind together and focus on activities away from the screen with our loved ones.

    • We facilitate sports, yoga, and meditation opportunities for each other.

    • Extended parental leave up to 26 calendar weeks for Primary Caregivers.*

    Compensation:

    • Restricted Stock Units or Stock Options are granted depending on a team member’s role, seniority, and location.*

    • Everyone gets to participate in the company's success through the company performance bonus.*

    • We offer an employee referral bonus of up to $3,000 paid out immediately after the new hire starts.

    • We reward & celebrate work anniversaries -- Bloomversaries!*

    (*Subject to employment type. Interns are exempt from marked benefits, usually for the first 6 months.)

    Excited? Join us and transform the future of commerce experiences!

    If this position doesn't suit you, but you know someone who might be a great fit, share it - we will be very grateful!


    Any unsolicited resumes/candidate profiles submitted through our website or to personal email accounts of employees of Bloomreach are considered property of Bloomreach and are not subject to payment of agency fees.

     #LI-Remote

    See more jobs at Bloomreach

    Apply for this job

    +30d

    Data Engineer II - (Remote - US)

    Mediavine - Atlanta, Georgia, United States, Remote
    sql, Design, python, AWS

    Mediavine is hiring a Remote Data Engineer II - (Remote - US)

    Mediavine is seeking an experienced Data Engineer to join our engineering team. We are looking for someone who enjoys solving interesting problems and wants to work with a small team of talented engineers on a product used by thousands of publishers. Applicants must be based in the United States.

    About Mediavine

    Mediavine is a fast-growing advertising management company representing over 10,000 websites in the food, lifestyle, DIY, and entertainment space. Founded by content creators, for content creators, Mediavine is a Top 20 Comscore property, exclusively reaching over 125 million monthly unique visitors. With best-in-class technology and a commitment to traffic quality and brand safety, we ensure optimal performance for our creators.

    Mission & Culture

    We are striving to build an inclusive and diverse team of highly talented individuals that reflect the industries we serve and the world we live in. The unique experiences and perspectives of our team members are encouraged and valued. If you are talented, driven, and enjoy the pace of a start-up-like environment, let’s talk!

    Position Title & Overview:

    The Data & Analytics team consists of data analysts, data engineers and analytics engineers working to build the most effective platform and tools to help uncover opportunities and make decisions with data here at Mediavine. We partner with Product, Support, Ad Operations and other teams within the Engineering department to understand behavior, develop accurate predictors and build solutions that provide the best internal and external experience possible.

    A Data Engineer at Mediavine will help build and maintain our data infrastructure: building scalable data pipelines, managing transformation processes, and ensuring data quality and security at every step along the way. This includes writing and maintaining code in Python and SQL, developing on AWS, and selecting and using third-party tools like Rundeck, Metabase, and others to round out the environment. You will be involved in decisions around tool selection and coding standards.

     Our current data engineering toolkit consists of custom Python data pipelines, AWS infrastructure including Kinesis pipelines, Rundeck scheduling, dbt for transformation and Snowflake as our data warehouse platform. We are open to new tools and expect this position to be a part of deciding the direction we take. 
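
    As a rough illustration of the kind of pipeline step this toolkit supports - a retrying extract plus a data-quality gate - here is a minimal, self-contained Python sketch. All names are hypothetical, not Mediavine code; a real pipeline would pull from an API or Kinesis and load into Snowflake.

```python
import time

def with_retries(fn, attempts=3, delay=0.0):
    # "Self-healing" wrapper: retry transient failures before alerting a human.
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)

def validate(records, required_fields=("id", "amount")):
    # Simple data-quality gate: split rows into good and bad so bad rows can
    # feed a notification with a clear action for the owning team.
    good, bad = [], []
    for rec in records:
        (good if all(f in rec for f in required_fields) else bad).append(rec)
    return good, bad

# Usage: a flaky source that fails once, then succeeds on retry.
calls = {"n": 0}
def flaky_source():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("transient outage")
    return [{"id": 1, "amount": 9.5}, {"id": 2}]

records = with_retries(flaky_source)
good, bad = validate(records)
print(len(good), len(bad))  # 1 good row, 1 row missing "amount"
```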

    Essential Responsibilities:

    • Create data pipelines that make data available for analytic and application use cases
    • Develop self-healing, resilient processes that do not require constant care and feeding to run smoothly
    • Create meaningful data quality notifications with clear actions for interested parties, including other internal teams and members of the Data & Analytics team
    • Lead projects from a technical standpoint, creating project Technical Design Documents
    • Support data analysts' and analytics engineers' ability to meet the needs of the organization
    • Participate in code reviews, understanding coding standards, ensuring test coverage and being aware of best practices
    • Build or implement tooling around data quality, governance and lineage, in the dbt framework and Snowflake but external to that as needed
    • Provide next level support when data issues are discovered and communicated by the data analysts
    • Work with data analysts and analytics engineers to standardize transformation logic in the dbt layer for consistency and ease of exploration by end users
    • Enable analytics engineers and data analysts by providing data modeling guidance, query optimization and aggregation advice

    Location: 

    • Applicants must be based in the United States

    You Have: 

    • 3+ years of experience in a data engineering role
    • Strong Python skills (Understands tradeoffs, optimization, etc)
    • Strong SQL skills (CTEs, window functions, optimization)
    • Experience working in cloud environments (AWS preferred, GCS, Azure)
    • An understanding of how to best structure data to enable internal and external facing analytics
    • Familiarity with calling APIs to retrieve data (Authentication flows, filters, limits, pagination)
    • Experience working with DevOps to deploy, scale and monitor data infrastructure
    • Scheduler experience, either traditional or DAG-based
    • Comfortable working with multi-TB cloud data warehouses (Snowflake preferred, Redshift, BigQuery)
    • Experience with other DBMSs (Postgres in particular)
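
    The API familiarity item above (authentication flows, filters, limits, pagination) can be illustrated with a minimal offset/limit pagination loop. The endpoint here is a stand-in, not a real API; a real client would also send auth headers and filter parameters.

```python
def fetch_page(offset, limit, _data=list(range(10))):
    # Hypothetical paginated endpoint: returns one page plus a has_more flag.
    return {"items": _data[offset:offset + limit],
            "has_more": offset + limit < len(_data)}

def iter_all(limit=4):
    # Follow offset/limit pagination until the API reports no more pages.
    offset = 0
    while True:
        page = fetch_page(offset, limit)
        yield from page["items"]
        if not page["has_more"]:
            break
        offset += limit

print(list(iter_all()))  # all 10 items, fetched 4 at a time
```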

    Nice to haves:

    • Experience with web analysis, such as creating data structures that support product funnels, user behavior, and decision path analysis 
    • Understanding of Snowflake external stages, file formats and snowpipe
    • Experience with orchestration tools particularly across different technologies and stacks
    • Experience with dbt
    • Knowledge of Ad Tech, Google Ad Manager and all of its fun quirks (so fun)
    • The ability to make your teammates laugh (it wouldn’t hurt if you were fun to work with is what I’m saying)
    • Familiarity with event tracking systems (NewRelic, Snowplow, etc)
    • Experience with one or more major BI tools (Domo, Looker, PowerBI, etc.)

    Benefits:

    • 100% remote 
    • Comprehensive benefits including Health, Dental, Vision and 401k match
    • Generous paid time off 
    • Wellness and Home Office Perks 
    • Up to 12 weeks of paid Parental Leave 
    • Inclusive Family Forming Benefits 
    • Professional development opportunities 
    • Travel opportunities for teams, our annual All Hands retreat as well as industry events

    Mediavine provides equal employment opportunities to applicants and employees. All aspects of employment will be based on merit, competence, performance, and business needs. We do not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.

    We strongly encourage minorities and individuals from underrepresented groups in technology to apply for this position.

    At Mediavine, base salary is one part of our competitive total compensation and benefits package and is determined using a salary range.  Individual compensation varies based on job-related factors, including business needs, experience, level of responsibility and qualifications. The base salary range for this role at the time of posting is $115,000 - $130,000 USD/yr.

    See more jobs at Mediavine

    Apply for this job

    +30d

    Data and Analytics Engineer

    airflowsqlDesignpython

    Cloudflare is hiring a Remote Data and Analytics Engineer

    About Us

    At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. 

    We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! 

    Available Locations: Lisbon, Portugal

    About the team

    You will be part of the Network Strategy team within Cloudflare’s Infrastructure Engineering department. The Network Strategy team focuses on building both external and internal relationships that allow for Cloudflare to scale and reach user populations around the world. Our group takes a long term and technical approach to forging mutually beneficial and sustainable relationships with all of our network partners. 

    About the role

    We are looking for an experienced Data and Analytics Engineer to join our team to scale our data insights initiatives. You will work with a wide array of data sources about network traffic, performance, and cost. You’ll be responsible for building data pipelines, doing ad-hoc analytics based on the data, and automating our analysis. Important projects include understanding the resource consumption and cost of Cloudflare’s broad product portfolio.

    A candidate will be successful in this role if they’re flexible and able to match the right solution to the right problem. Flexibility is key. Cloudflare is a fast-paced environment and requirements change frequently.

    What you'll do

    • Design and implement data pipelines that take unprocessed data and make it usable for advanced analytics
    • Work closely with other product and engineering teams to ensure our products and services collect the right data for our analytics
    • Work closely with a cross functional team of data scientists and analysts and internal stakeholders on strategic initiatives 
    • Build tooling, automation, and visualizations around our analytics for consumption by other Cloudflare teams

    Examples of desirable skills, knowledge and experience

    • Excellent Python and SQL (one of the interviews will be a code review)
    • B.S. or M.S. in Computer Science, Statistics, Mathematics, or other quantitative fields, or equivalent experience
    • Minimum 3 years of industry experience in software engineering, data engineering, data science or related field with a track record of extracting, transforming and loading large datasets 
    • Knowledge of data management fundamentals and data storage/computing principles
    • Excellent communication & problem solving skills 
    • Ability to collaborate with cross functional teams and work through ambiguous business requirements
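
    Since one of the interviews is a code review covering Python and SQL, here is a small, self-contained example of the kind of SQL in scope - a CTE feeding a window function - runnable with Python's bundled sqlite3 module. The table and data are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE traffic (colo TEXT, day INTEGER, requests INTEGER);
INSERT INTO traffic VALUES
  ('LIS', 1, 100), ('LIS', 2, 150), ('MAD', 1, 80), ('MAD', 2, 60);
""")
# A CTE feeding a window function: per-colo running total of requests.
rows = con.execute("""
WITH daily AS (
  SELECT colo, day, requests FROM traffic
)
SELECT colo, day,
       SUM(requests) OVER (PARTITION BY colo ORDER BY day) AS running_total
FROM daily
ORDER BY colo, day
""").fetchall()
for r in rows:
    print(r)
```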

    Bonus Points

    • Familiarity with Airflow 
    • Familiarity with Google Cloud Platform or other analytics databases

    What Makes Cloudflare Special?

    We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

    Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.

    Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.

    1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here’s the deal - we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.

    Sound like something you’d like to be a part of? We’d love to hear from you!

    This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

    Cloudflare is proud to be an equal opportunity employer.  We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness.  All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

    Cloudflare provides reasonable accommodations to qualified individuals with disabilities.  Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment.  If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.

    See more jobs at Cloudflare

    Apply for this job

    +30d

    Azure Data Engineer

    ProArch - Hyderabad, Telangana, India, Remote
    Design, azure

    ProArch is hiring a Remote Azure Data Engineer

    ProArch is hiring a skilled Azure Data Engineer to join our team. As an Azure Data Engineer, you will be responsible for designing, implementing, and maintaining data processing systems using Azure technologies. Additionally, you will collaborate with cross-functional teams to understand business requirements, identify opportunities for data-driven improvements, and deliver high-quality solutions. If you have a strong background in Azure data tools and technologies, excellent problem-solving skills, and a passion for data engineering, we want to hear from you!

    Responsibilities:

    • Design, develop, and implement data engineering solutions on the Azure platform
    • Create and maintain data pipelines and ETL processes
    • Optimize data storage and retrieval for performance and scalability
    • Collaborate with data scientists and analysts to build data models and enable data-driven insights
    • Ensure data quality and integrity through data validation and cleansing
    • Monitor and troubleshoot data pipelines and resolve any issues
    • Stay up-to-date with the latest Azure data engineering best practices and technologies

    Requirements:

    • Excellent communication skills
    • Strong experience in Python/Pyspark
    • The ability to understand business concepts and work with customers to process data accurately.
    • A solid understanding of Azure Data Lake, Spark for Synapse (or Azure Databricks), Synapse Pipelines (or Azure Data Factory), Mapping Data Flows, SQL Server, and Synapse Serverless/Pools (or SQL Data Warehouse).
    • Experience with source control, version control, and moving data artifacts from Dev to Test to Prod.
    • A proactive self-starter who likes to deliver value, solve challenges, and make progress.
    • Comfortable working in a team or as an individual contributor
    • Good data modelling skills (e.g., relationships, entities, facts, and dimensions)
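
    The data validation and cleansing responsibility above can be sketched in a few lines of plain Python; in practice this would run in PySpark or a Synapse pipeline, and the rules and field names here are illustrative assumptions.

```python
def cleanse(rows):
    # Minimal cleansing pass: trim whitespace, normalize empty strings to None,
    # and drop exact duplicates while preserving order.
    seen, out = set(), []
    for row in rows:
        cleaned = {k: (v.strip() or None) if isinstance(v, str) else v
                   for k, v in row.items()}
        key = tuple(sorted(cleaned.items()))
        if key not in seen:
            seen.add(key)
            out.append(cleaned)
    return out

# Usage: the first two rows collapse into one after normalization.
raw = [{"name": " Ada ", "city": ""},
       {"name": "Ada", "city": None},
       {"name": "Grace", "city": "London"}]
print(cleanse(raw))
```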

    See more jobs at ProArch

    Apply for this job

    +30d

    Senior Data Engineer

    CLEAR - Corporate - New York, New York, United States (Hybrid)
    tableau, airflow, sql, Design, jenkins, python, AWS

    CLEAR - Corporate is hiring a Remote Senior Data Engineer

    Today, CLEAR is well-known as a leader in digital and biometric identification, reducing friction for our members wherever an ID check is needed. We’re looking for an experienced Senior Data Engineer to help us build the next generation of products which will go beyond just ID and enable our members to leverage the power of a networked digital identity. As a Senior Data Engineer at CLEAR, you will participate in the design, implementation, testing, and deployment of applications to build and enhance our platform - one that interconnects dozens of attributes and qualifications while keeping member privacy and security at the core. 


    A brief highlight of our tech stack:

    • SQL / Python / Looker / Snowflake / Airflow / Databricks / Spark / dbt

    What you'll do:

    • Build a scalable data system in which Analysts and Engineers can self-service changes in an automated, tested, secure, and high-quality manner 
    • Build processes supporting data transformation, data structures, metadata, dependency and workload management
    • Develop and maintain data pipelines to collect, clean, and transform data, owning the end-to-end data product from ingestion to visualization
    • Develop and implement data analytics models
    • Partner with product and other stakeholders to uncover requirements, to innovate, and to solve complex problems
    • Have a strong sense of ownership, responsible for architectural decision-making and striving for continuous improvement in technology and processes at CLEAR
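
    The "dependency and workload management" item above is essentially DAG scheduling, as in Airflow or Dagster. A minimal sketch with Python's standard-library graphlib (task names are hypothetical, not CLEAR's actual pipeline):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline dependency graph: each task maps to the tasks it needs.
deps = {
    "clean": {"ingest"},
    "transform": {"clean"},
    "metadata": {"ingest"},
    "dashboard": {"transform", "metadata"},
}

# static_order yields a valid run order respecting every dependency edge.
order = list(TopologicalSorter(deps).static_order())
print(order)  # a valid run order, starting with "ingest"
```

    An orchestrator does the same resolution, plus scheduling, retries, and monitoring around each task.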

     What you're great at:

    • 6+ years of data engineering experience
    • Experience with cloud-based application development, and fluency in at least a few of: 
      • Cloud services providers like AWS
      • Data pipeline orchestration tools like Airflow, Dagster, Luigi, etc
      • Big data tools like Spark, Kafka, Snowflake, Databricks, etc
      • Collaboration, integration, and deployment tools like Github, Argo, and Jenkins 
      • Data visualization tools like Looker, Tableau, etc
    • Articulating technical concepts to a mixed audience of technical and non-technical stakeholders
    • Collaborating and mentoring less experienced members of the team
    • Comfort with ambiguity 
    • Curiosity about technology, belief in constant learning, and the autonomy to figure out what's important

    How You'll be Rewarded:

    At CLEAR we help YOU move forward - because when you’re at your best, we’re at our best. You’ll work with talented team members who are motivated by our mission of making experiences safer and easier. Our hybrid work environment provides flexibility. In our offices, you’ll enjoy benefits like meals and snacks. We invest in your well-being and learning & development with our stipend and reimbursement programs. 

    We offer holistic total rewards, including comprehensive healthcare plans, family building benefits (fertility and adoption/surrogacy support), flexible time off, free OneMedical memberships for you and your dependents, and a 401(k) retirement plan with employer match. The base salary range for this role is $175,000 - $215,000, depending on levels of skills and experience.

    The base salary range represents the low and high end of CLEAR’s salary range for this position. Salaries will vary depending on various factors which include, but are not limited to location, education, skills, experience and performance. The range listed is just one component of CLEAR’s total compensation package for employees and other rewards may include annual bonuses, commission, Restricted Stock Units.

    About CLEAR

    Have you ever had that green-light feeling? When you hit every green light and the day just feels like magic. CLEAR's mission is to create frictionless experiences where every day has that feeling. With more than 25 million passionate members and hundreds of partners around the world, CLEAR’s identity platform is transforming the way people live, work, and travel. Whether it’s at the airport, stadium, or right on your phone, CLEAR connects you to the things that make you, you - unlocking easier, more secure, and more seamless experiences - making them all feel like magic.

    CLEAR provides reasonable accommodation to qualified individuals with disabilities or protected needs. Please let us know if you require a reasonable accommodation to apply for a job or perform your job. Examples of reasonable accommodation include, but are not limited to, time off, extra breaks, making a change to the application process or work procedures, policy exceptions, providing documents in an alternative format, live captioning or using a sign language interpreter, or using specialized equipment.

    See more jobs at CLEAR - Corporate

    Apply for this job

    +30d

    Snowflake Data Engineer

    Onebridge - Indianapolis, IN - Remote - Hybrid
    sql, Design, git

    Onebridge is hiring a Remote Snowflake Data Engineer

    Onebridge is a Consulting firm with an HQ in Indianapolis, and clients dispersed throughout the United States and beyond. We have an exciting opportunity for a highly skilled Snowflake Data Engineer to join an innovative and dynamic group of professionals at a company rated among the top “Best Places to Work” in Indianapolis since 2015.

    Snowflake Data Engineer | About You

    As a Snowflake Data Engineer, you are responsible for defining data requirements, developing technical specifications, and architecting scalable and efficient data pipelines. You have a strong background in cloud-based data platforms and services, along with proven leadership skills to manage a team of data engineers. You will optimize ETL architectures and ensure adherence to best practices, security, and coding guidelines. You will also work closely with cross-functional teams, offering strategic insights and reporting project status, risks, and issues.

    Snowflake Data Engineer | Day-to-Day

    • Lead a team of data engineers in the design, development, and implementation of cloud-based data solutions using Snowflake, Fivetran, and Azure services.
    • Collaborate with cross-functional teams to define data requirements, develop technical specifications, and architect scalable, efficient data pipelines.
    • Design and implement data models, ETL processes, and data integration solutions to support business objectives and ensure data quality and integrity.
    • Optimize data architecture for performance, scalability, and cost-effectiveness, leveraging cloud-native technologies and best practices.
    • Provide technical leadership and mentorship, guiding team members in the adoption of best practices, tools, and methodologies.

    Snowflake Data Engineer | Skills & Experience

    • 8+ years of experience as a Data Engineer with a focus on cloud-based data platforms and services such as AWS, Azure, or GCP.
    • Extensive hands-on experience designing and implementing data solutions using Snowflake, Fivetran, and Azure cloud environments.
    • Strong proficiency in SQL and Python, with advanced knowledge of data modelling techniques, dimensional modelling, and data warehousing concepts.
    • In-depth understanding of data governance, security, and compliance frameworks, with experience in implementing security controls and encryption in cloud environments.
    • Excellent leadership and communication skills, with the ability to lead cross-functional teams, communicate technical strategies, and achieve goals in a fast-paced environment.

      A Best Place to Work in Indiana, since 2015.

      See more jobs at Onebridge

      Apply for this job

      +30d

      Principal Data Engineer

      Transcarent API - US - Remote
      ML, Sales, EC2, Bachelor's degree, scala, sql, Design, api, java, c++, python, AWS

      Transcarent API is hiring a Remote Principal Data Engineer

      Who we are  

      Transcarent is the One Place for Health and Care. We cut through complexity, making it easy for people to access high-quality, affordable health and care. We create a personalized experience tailored for each Member, including an on-demand care team, and a connected ecosystem of high-quality, in-person care and virtual point solutions. Transcarent eliminates the guesswork and empowers Members to make better decisions about their health and care.

      Transcarent is aligned with those who pay for healthcare and takes accountability for results - offering at-risk pricing models and transparent impact reporting to ensure incentives support a measurably better experience, better health, and lower costs. 

      At Transcarent, you will be part of a world-class team, supported by top-tier investors like 7wireVentures and General Catalyst, and founded by a mission-driven team committed to transforming the health and care experience for all. In May 2024, we closed our Series D with $126 million, propelling our total funding to $450 million and fueling accelerated AI capabilities and strategic growth opportunities. 

      We are looking for teammates to join us in building our company, culture, and Member experience who:  

      • Put people first, and make decisions with the Member’s best interests in mind 
      • Are active learners, constantly looking to improve and grow 
      • Are driven by our mission to measurably improve health and care each day 
      • Bring the energy needed to transform health and care, and move and adapt rapidly 
      • Are laser focused on delivering results for Members, and proactively problem solving to get there 

      What you’ll do  

      • Lead the Design and Implementation: Using modern data architecture principles, architect and implement cutting-edge data processing platforms and enterprise-wide data solutions. 
      • Scale Data Platform: Develop a scalable Platform for optimal data extraction, transformation, and loading from various sources, ensuring data integrity and accessibility. 
      • AI / ML platform: Design and build scalable AI and ML platforms to support Transcarent use cases.  
      • Collaborate Across Teams: Partner with Executive, Product, Clinical, Data, and Design teams to meet their data infrastructure needs, supporting them with technical expertise. 
      • Optimize Data Pipelines: Build and optimize complex data pipelines, ensuring high performance, reliability, and scalability. 
      • Innovate and Automate: Create and maintain data tools and pipelines that empower analytics, data science, and other teams to drive innovation and operational excellence. 
      • Mentor and Lead: Provide technical leadership and mentorship to the data engineering team, fostering a culture of continuous learning and improvement. 

      What we’re looking for:  

      • Experienced: 10+ years of experience in data engineering with a strong background in building and scaling data architectures in complex environments. Healthcare experience is a plus. 
      • Technical Expertise: Advanced working knowledge of SQL, relational databases, and big data tools (e.g., Spark, Kafka). Proficient in cloud-based data warehousing (e.g., Snowflake) and cloud services (e.g., AWS), with a solid understanding of AI/ML workflows. 
      • Architectural Visionary: Demonstrated experience in service-oriented and event-based architecture with strong API development skills. 
      • Problem Solver: Ability to manage and optimize processes supporting data transformation, metadata management, and workload management. 
      • Collaborative Leader: Strong communication skills with the ability to present ideas clearly and lead cross-functional teams effectively. 
      • Project Management: Strong project management and organizational skills, capable of leading multiple projects simultaneously. 

      Nice to have: 

      • Preferred tech stack: 
        • Data Platforms: Experience with building Data-as-a-Service platforms and API development. 
        • Programming Languages: Proficient in object-oriented scripting languages such as Python, Java, C++, Scala, etc. 
        • Big Data Tools: Expertise with tools like Spark, Kafka, and stream-processing systems like Storm or Spark-Streaming. 
        • Cloud Services: In-depth experience with AWS services (EC2, EMR, RDS, Redshift) and workflow management tools like Airflow. 
      As a remote position, the salary range for this role is:
      $210,000 - $220,000 USD

      Total Rewards 

      Individual compensation packages are based on a few different factors unique to each candidate, including primary work location and an evaluation of a candidate’s skills, experience, market demands, and internal equity.  

      Salary is just one component of Transcarent's total package. All regular employees are also eligible for the corporate bonus program or a sales incentive (target included in OTE) as well as stock options.  

      Our benefits and perks programs include, but are not limited to:  

      • Competitive medical, dental, and vision coverage  
      • Competitive 401(k) Plan with a generous company match  
      • Flexible Time Off/Paid Time Off, 12 paid holidays  
      • Protection Plans including Life Insurance, Disability Insurance, and Supplemental Insurance 
      • Mental Health and Wellness benefits  

      Location  

      You must be authorized to work in the United States. Depending on the position we may have a preference to a specific location, but are generally open to remote work anywhere in the US.  

      Transcarent is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. If you are a person with a disability and require assistance during the application process, please don’t hesitate to reach out!  

      Research shows that candidates from underrepresented backgrounds often don’t apply unless they meet 100% of the job criteria. While we have worked to consolidate the minimum qualifications for each role, we aren’t looking for someone who checks each box on a page; we’re looking for active learners and people who care about disrupting the current health and care with their unique experiences. 

       

      Apply for this job

      +30d

      Data Engineer II

      Life360 - Remote, USA
      agile, 3 years of experience, remote-first, terraform, sql, Design, mobile, c++, python, AWS

      Life360 is hiring a Remote Data Engineer II

      About Life360

      Life360’s mission is to keep people close to the ones they love. Our category-leading mobile app and Tile tracking devices empower members to protect the people, pets, and things they care about most with a range of services, including location sharing, safe driver reports, and crash detection with emergency dispatch. Life360 serves approximately 66 million monthly active users (MAU) across more than 150 countries. 

      Life360 delivers peace of mind and enhances everyday family life with seamless coordination for all the moments that matter, big and small. By continuing to innovate and deliver for our customers, we have become a household name and the must-have mobile-based membership for families (and those friends that basically are family). 

      Life360 has more than 500 (and growing!) remote-first employees. For more information, please visit life360.com.

      Life360 is a Remote First company, which means a remote work environment will be the primary experience for all employees. All positions, unless otherwise specified, can be performed remotely (within the US) regardless of any specified location above. 

      About the Team

      The Data and Analytics team is looking for a high-intensity individual who is passionate about driving our in-app ads experience forward. Our mission is to envision, design, and build high-impact ads data products while maintaining a commitment to putting our members before metrics. You will be expected to become the team's go-to member for ads-related data products. We want open-minded individuals who are highly collaborative and communicative. We work together, celebrate our wins as a team, and are committed to building a welcoming team where everyone can perform their best.

      About the Job

      At Life360, we collect a lot of data: tens of billions of unique location points, billions of user actions, billions of miles driven every single month, and so much more. As a Data Engineer II, you will contribute to enhancing and maintaining our data processing and storage pipelines/workflows for a robust and secure data lake. You should have a strong engineering background and even more importantly a desire to take ownership of our data systems to make them world class.

      The US-based salary range for this role is $135,000 - $185,000. We take into consideration an individual's background and experience in determining final salary - therefore, base pay offered may vary considerably depending on geographic location, job-related knowledge, skills, and experience. The compensation package includes a wide range of medical, dental, vision, financial, and other benefits, as well as equity.

      What You’ll Do

      Primary responsibilities include, but are not limited to:

      • Design, implement, and maintain scalable data processing platforms used for real-time analytics and exploratory data analysis.
      • Manage our ads data from ingestion through ETL to storage and batch processing.
      • Automate, test and harden all data workflows.
      • Architect logical and physical data models to ensure the needs of the business are met.
      • Collaborate with our ads analytics teams, while applying best practices.
      • Architect and develop systems and algorithms for distributed real-time analytics and data processing.
      • Implement strategies for acquiring and transforming our data to develop new insights.
      • Champion data engineering best practices and institutionalizing efficient processes to foster growth and innovation within the team.

      What We’re Looking For

      • Minimum of 3 years of experience working with high volume data infrastructure.
      • Experience with cloud computing platforms like Databricks and AWS, or a data transformation framework like dbt.
      • Experience with job orchestration tooling like Airflow.
      • Proficiency in programming with either Python or Java.
      • Proficiency with SQL and ability to optimize queries.
      • Experience in data modeling and database design.
      • Experience with large-scale data processing using Spark and/or Presto/Trino.
      • Experience working with an ads-related data system like Google Ad Manager, Adbutler, Kevel, Acxiom, Fantix, LiveRamp, etc.
      • Experience in modern development lifecycle including Agile methodology, CI/CD, automated deployments using Terraform, GitHub Actions etc.
      • Knowledge and proficiency in the latest open source and data frameworks, modern data platform tech stacks and tools.
      • Always learning and staying up to speed with the fast moving data world.
      • You have good communication skills and can work independently.
      • BS in Computer Science, Software Engineering, Mathematics, or equivalent experience.
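      The SQL proficiency and query-optimization skills listed above can be sketched with Python's built-in sqlite3 module (the table, column, and index names here are hypothetical, and a production engine like Redshift behaves differently, but the inspect-the-plan workflow is the same):

```python
import sqlite3

# Hypothetical example: how adding an index changes a query plan.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts TEXT, action TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i % 100, f"2024-01-{i % 28 + 1:02d}", "open") for i in range(1000)],
)

# Without an index, filtering by user_id scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchall()

# With an index, SQLite can seek directly to the matching rows.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchall()

print(plan_before[0][3])  # e.g. a SCAN over events
print(plan_after[0][3])   # e.g. a SEARCH using idx_events_user
```

      Inspecting the plan before and after an index change is a cheap way to verify that the optimizer is actually using the index.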

      Our Benefits

      • Competitive pay and benefits
      • Medical, dental, vision, life and disability insurance plans (100% paid for employees)
      • 401(k) plan with company matching program
      • Mental Wellness Program & Employee Assistance Program (EAP) for mental well-being
      • Flexible PTO, 13 company wide days off throughout the year
      • Winter and Summer Week-long Synchronized Company Shutdowns
      • Learning & Development programs
      • Equipment, tools, and reimbursement support for a productive remote environment
      • Free Life360 Platinum Membership for your preferred circle
      • Free Tile Products

      Life360 Values

      Our company’s mission driven culture is guided by our shared values to create a trusted work environment where you can bring your authentic self to work and make a positive difference.

      • Be a Good Person - We have a team of high integrity people you can trust. 
      • Be Direct With Respect - We communicate directly, even when it’s hard.
      • Members Before Metrics - We focus on building an exceptional experience for families. 
      • High Intensity, High Impact - We do whatever it takes to get the job done. 

      Our Commitment to Diversity

      We believe that different ideas, perspectives and backgrounds create a stronger and more creative work environment that delivers better results. Together, we continue to build an inclusive culture that encourages, supports, and celebrates the diverse voices of our employees. It fuels our innovation and connects us closer to our customers and the communities we serve. We strive to create a workplace that reflects the communities we serve and where everyone feels empowered to bring their authentic best selves to work.

      We are an equal opportunity employer and value diversity at Life360. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, disability status or any legally protected status.

      We encourage people of all backgrounds to apply. We believe that a diversity of perspectives and experiences creates a foundation for the best ideas. Come join us in building something meaningful. Even if you don’t meet 100% of the above qualifications, you should still seriously consider applying!

      #LI-Remote

      ____________________________________________________________________________

      See more jobs at Life360

      Apply for this job

      +30d

      Senior Data Engineer

      StyleSeat | 100% Remote (U.S. Based Only, Select States)
      scala, nosql, airflow, sql, Design, c++, docker, MySQL, python

      StyleSeat is hiring a Remote Senior Data Engineer

      Senior Data Engineer

      100% Remote (U.S. Based Only, Select States - See Below)

      About the role

      StyleSeat is looking to add a Senior Data Engineer to its cross-functional Search product team. This team of data scientists, analysts, data engineers, software engineers and SDETs is focused on improving our search capability and customer search experience. The Senior Data Engineer will use frameworks and tools to perform ETL and will propose abstractions of those methods to help solve data-ingestion problems. 

      What you’ll do

      • Handle data engineering tasks in a team focused on improving search functionality and customer search experience.
      • Design, develop, and own ETL pipelines that deliver data with measurable quality.
      • Scope, architect, build, release, and maintain data oriented projects, considering performance, stability, and an error-free operation.
      • Identify and resolve pipeline issues while discovering opportunities for improvement.
      • Architect scalable and reliable solutions to move data across systems from multiple products in nearly real-time.
      • Continuously improve our data platform and keep the technology stack current.
      • Solve critical issues in complex designs or coding schemes.
      • Monitor metrics, analyze data, and partner with other internal teams to solve difficult problems creating a better customer experience.

      Who you are 

      Successful candidates can come from a variety of backgrounds; here are some of the must-have and nice-to-have experiences we’re looking for:

      Must-Have:

      • Expert SQL skills.
      • 4+ years of experience with:
        • Scaling and optimizing schemas.
        • Performance tuning ETL pipelines.
        • Building pipelines for processing large amounts of data.
      • Proficiency with Python, Scala, and other scripting languages.
      • Experience with:
        • MySQL and Redshift.
        • NoSQL data stores, methods, and approaches.
        • Kinesis or other data streaming services. 
        • Airflow or other pipeline workflow management tools. 
        • EMR, Spark, and ElasticSearch.
        • Docker or other container management tools. 
        • Developing infrastructure as code (IaC).
      • Ability to effectively work and communicate with cross-departmental partners and non-technical teams.
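      As a rough, stdlib-only sketch of the kind of batch ETL pipeline described in the must-haves above (the schema and records are invented for illustration; a real pipeline would run on tools like Airflow against Redshift):

```python
import csv
import io
import sqlite3

# Hypothetical mini ETL: extract rows from CSV, transform them, load into SQLite.
raw_csv = io.StringIO(
    "booking_id,price_cents,city\n"
    "1,4500,austin\n"
    "2,12000,denver\n"
    "3,not_a_number,austin\n"  # malformed record, used to demonstrate validation
)

def extract(fh):
    yield from csv.DictReader(fh)

def transform(rows):
    for row in rows:
        try:
            price = int(row["price_cents"]) / 100  # cents -> dollars
        except ValueError:
            continue  # drop malformed records (a real pipeline would quarantine them)
        yield (int(row["booking_id"]), price, row["city"].title())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (id INTEGER PRIMARY KEY, price REAL, city TEXT)")
conn.executemany("INSERT INTO bookings VALUES (?, ?, ?)", transform(extract(raw_csv)))

total = conn.execute("SELECT COUNT(*), SUM(price) FROM bookings").fetchone()
print(total)  # (2, 165.0) -- the bad row was dropped during transform
```

      The generator-based extract/transform stages keep memory usage flat, which is the same shape the "processing large amounts of data" bullet above calls for at scale.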

      Nice to Have:

      • Experience with:
        • Segment customer data platform with integration to Braze.
        • Terraform. 
        • Tableau.
        • Django.
        • Flask.

      Salary Range

      Our job titles may span more than one career level. The career level we are targeting for this role has a base pay between $136,900 and $184,600. The actual base pay is dependent upon many factors, such as: training, transferable skills, work experience, business needs and market demands. Base pay ranges are subject to change and may be modified in the future. 

      Who we are

      StyleSeat is the premier business platform for SMBs in the beauty and wellness industry to run and grow their business, and the destination for consumers to discover, book, and pay. To date, StyleSeat has powered more than 200 million appointments totaling over $12 billion in revenue for small businesses. StyleSeat is a platform and marketplace designed to support and promote the beauty and personal care community. 

      Today, StyleSeat connects consumers with top-rated beauty professionals in their area for a variety of services, including hair styling, barbering, massage, waxing, and nail care, among others. Our platform ensures that Pros maximize their schedules and earnings by minimizing gaps and cancellations, effectively attracting and retaining clientele.

      StyleSeat Culture & Values 

      At StyleSeat, our team is committed to fostering a positive and inclusive work environment. We respect and value the unique perspectives, experiences, and skills of our team members and work to create opportunities for all to grow and succeed. 

      • Diversity - We celebrate and welcome diversity in backgrounds, experiences, and perspectives. We believe in the importance of creating an inclusive work environment where everyone can thrive. 
      • Curiosity- We are committed to fostering a culture of learning and growth. We ask questions, challenge assumptions, and explore new ideas. 
      • Community - We are committed to making a positive impact on each community we serve, even when win-win-win scenarios are not always clear or possible in every decision. We strive to find solutions that benefit the community as a whole and drive our shared success.
      • Transparency - We are committed to open, honest, and clear communication. We hold ourselves accountable for maintaining the trust of our customers and team.
      • Entrepreneurship - We are self-driven big-picture thinkers - we move fast and pivot when necessary to achieve our goals. 

      Applicant Note: 

      StyleSeat is a fully remote, distributed workforce; however, we only have business entities established in the states listed below, and thus we are unable to consider candidates who live in states not on this list for the time being.
      Applicants must be authorized to work for ANY employer in the U.S. We are unable to sponsor or take over sponsorship of an employment visa at this time.

       

      * Arizona

      * Alabama

      * California

      * Colorado

      * Florida

      * Georgia

      * Illinois

      * Indiana

      * Massachusetts

      * Maryland

      * Michigan

      * Nebraska

      * New York

      * New Jersey 

      * Ohio

      * Oregon

      * Pennsylvania

      * Virginia 

      * Washington

       

      See more jobs at StyleSeat

      Apply for this job

      +30d

      Data Engineer

      Creative CFO (Pty) Ltd | Cape Town, ZA - Remote
      Lambda, nosql, postgres, sql, Design, azure, api, AWS

      Creative CFO (Pty) Ltd is hiring a Remote Data Engineer

      Become part of a vibrant, quality-focused team that leverages trust and autonomy to deliver exceptional services to diverse, high-growth clients. Receive recognition for your committed, results-producing approach to problem-solving, and opportunities for learning to realise your own passion for personal growth. All while working with some of the country’s most exciting growing businesses - from local entertainers, gin distilleries and ice-cream parlours, to enterprises revolutionising traditional spaces like retail, property and advertising or treading on the cutting edge of fintech.

      About the company

      At Creative CFO (Pty) Ltd, we provide companies with the best financial tools and services to plan, structure, invest and grow. We believe that innovative solutions are an interconnected web of small problems solved brilliantly. By walking through all the financial processes for each company and solving problems along the way, we have developed a full-service solution that business owners really appreciate.

      We are committed to solving the challenges that small business owners face. Accounting and tax is one part of this, but we also cover business process analysis, financial systems implementation and investment support. We unlock value by creating a platform through which business owners can manage and focus their creativity, energy and financial resources.

      Position Summary

      As a Data Engineer at Creative CFO, you will be at the forefront of architecting, building, and maintaining our data infrastructure, supporting data-driven decision-making processes.

      We are dedicated to optimising data flow and collection to enhance our financial clarity services for high-growth businesses. Join our dynamic and rapidly expanding team, committed to building a world where more SMEs thrive.

      The Ideal Candidate

      To be successful in the role you will need to:

      Build and optimise data systems:

        • Design, construct, install, test, and maintain highly scalable data management systems.
        • Ensure systems meet business requirements and industry practices.

        Expertly process data:

        • Develop batch processing and real-time data streaming capabilities.
        • Read, extract, transform, stage, and load data to selected tools and frameworks as required.

        Build data Integrations

        • Work closely with data analysts, data scientists, and other stakeholders to assist with data-related technical issues and support their data infrastructure needs.
        • Collaborate with data analytics and BI teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision-making across the organisation.

        Be versatile with technology

        • Exhibit proficiency in ETL tools such as Apache NiFi or Talend, and a deep understanding of SQL and NoSQL databases, including Postgres and Cassandra.
        • Demonstrate expertise in at least one cloud services platform, such as Microsoft Azure, Google Cloud Platform/Engine, or AWS.

        Focus on quality assurance

        • Implement systems for monitoring data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
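        A minimal sketch of such a data-quality monitor in plain Python (the field names, thresholds, and report shape are all hypothetical):

```python
from datetime import date

# Hypothetical data-quality check: verify completeness and freshness of a
# batch of records before it is published to downstream consumers.
def quality_report(rows, required_fields, max_age_days=2, today=None):
    today = today or date.today()
    issues = {"missing_fields": 0, "stale_rows": 0}
    for row in rows:
        if any(row.get(f) in (None, "") for f in required_fields):
            issues["missing_fields"] += 1
        if (today - row["loaded_at"]).days > max_age_days:
            issues["stale_rows"] += 1
    issues["total_rows"] = len(rows)
    issues["passed"] = issues["missing_fields"] == 0 and issues["stale_rows"] == 0
    return issues

rows = [
    {"invoice_id": 1, "amount": 100.0, "loaded_at": date(2024, 3, 1)},
    {"invoice_id": 2, "amount": None, "loaded_at": date(2024, 3, 1)},  # missing amount
    {"invoice_id": 3, "amount": 55.0, "loaded_at": date(2024, 2, 1)},  # stale load
]
report = quality_report(rows, ["invoice_id", "amount"], today=date(2024, 3, 2))
print(report)  # {'missing_fields': 1, 'stale_rows': 1, 'total_rows': 3, 'passed': False}
```

        Checks like this would typically run as a gate in the pipeline, blocking publication to key stakeholders whenever the batch fails.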

        Have a growth mindset

        • Stay informed of the latest developments in data engineering, and adopt best practices to continuously improve the robustness and performance of data processing and storage systems.

        Requirements

        Key skills & qualifications:

        • Bachelor’s degree in Statistics, Data Science, Computer Science, Information Technology, or Engineering.
        • Minimum of 2 years of professional experience in a Data Engineering role, with a proven track record of successful data infrastructure projects.
        • Proficiency in data analysis tools and languages such as SQL, R, and Python.
        • Strong knowledge of data modeling, data access, and data storage techniques.
        • Proficiency in at least one of Microsoft Azure, Google Cloud Platform/Engine, or AWS Lambda environments.
        • Familiarity with data visualisation tools such as PowerBI, Pyramid, and Google Looker Studio.
        • Excellent analytical and problem-solving skills.
        • Effective communication skills to convey technical findings to non-technical stakeholders.
        • Eagerness to learn and adapt to new technologies and methodologies.

        Relevant experience required:

        • Previous roles might include Data Engineer, Data Systems Developer, or similar positions with a focus on building and maintaining scalable, reliable data infrastructures.
        • Strong experience in API development, integration, and management. Proficient in RESTful and SOAP services, with a solid understanding of API security best practices.
        • Experience in a fast-paced, high-growth environment, demonstrating the ability to adapt to changing needs and technologies.

        Get hired sooner:

        • At Creative CFO we build from the bottom up, and a key component of this is a solid technical understanding of the role and its requirements.
        • To expedite our hiring process and stand out as a candidate, please consider completing our 60-minute technical assessment below:
        • https://app.testgorilla.com/s/i5v3ys17

        Why Apply

        Vibrant Community

        • Be part of a close-knit, vibrant community that fosters creativity, wellness, and regular team-building events.
        • Celebrate individual contributions to team wins, fostering a culture of recognition.

        Innovative Leadership

        • Work under forward-thinking leadership shaping the future of data analytics.
        • Receive intentional mentorship for your professional and personal development.

        Education and Growth

        • Receive matched pay on education spend and extra leave days for ongoing education.
        • Enjoy a day's paid leave on your birthday - a celebration of you!

        Hybrid Work Setup

        • Experience the flexibility of a hybrid work setup, with currently one in-office day per month.
        • Choose to work in a great office space, if preferred.

        Professional and Personal Resources

        • Use the best technology, provided by the company
        • Benefit from Parental and Maternity Leave policies designed for our team members.

        See more jobs at Creative CFO (Pty) Ltd

        Apply for this job

        +30d

        Data Engineer

        Brushfire | Fort Worth, TX, Remote
        DevOPS, Bachelor's degree, tableau, sql, Firebase, azure, c++, typescript, kubernetes, python

        Brushfire is hiring a Remote Data Engineer

        Job Description

        The primary responsibility of this position is to manage our data warehouse and the pipelines that feed to/from that warehouse. This requires advanced knowledge of our systems, processes, data structures, and existing tooling. The secondary responsibility will be administering our production OLTP database to ensure it runs smoothly and follows standard best practices.

        The ideal candidate for this position is someone who is extremely comfortable with the latest technology and trends, and who favors concrete execution over abstract ideation. We are proudly impartial when it comes to solutions – we like to try new things, and the best idea is always the winning idea, regardless of the way we’ve done it previously.

        This is a full-time work-from-home position.

        Qualifications

        The following characteristics are necessary for a qualified applicant to be considered:

        • 3+ years of experience working with data warehouses (BigQuery preferred; Redshift, Snowflake, etc.)

        • 3+ years of experience working with dbt, ETL (Fivetran, Airbyte, etc.), and Reverse ETL (Census) solutions 

        • 3+ years of experience with Database Administration (Azure SQL, Query Profiler, Redgate, etc.)

        • 1+ years of experience with BI/Visualization tools (Google Data/Looker Studio, Tableau, etc.)

        • Experience with PubSub databases, specifically Google Cloud Firestore and Firebase Realtime Database

        • Experience with GitHub (or other similar version control systems) and CI/CD pipeline tools like Azure DevOps and GitHub Actions

        • Ability to communicate fluently, pleasantly, and effectively—both orally and in writing, in the English language—with customers and co-workers.

        • Concrete examples and evidence of work product and what role the applicant played in it

        The following characteristics are not necessary but are highly desired:

        • Experience with Kubernetes, C#, TypeScript, Python

        • Bachelor's degree or higher in computer science or related technical field

        • Ability to contribute to strategic and planning sessions around architecture and implementation

        See more jobs at Brushfire

        Apply for this job

        +30d

        Senior Data Engineer

        EquipmentShare | Remote; Chicago; Denver; Kansas City; Columbia, MO
        Lambda, agile, airflow, sql, Design, c++, postgresql, python, AWS

        EquipmentShare is hiring a Remote Senior Data Engineer

        EquipmentShare is Hiring a Senior Data Engineer.

        Your role in our team

        At EquipmentShare, we believe it’s more than just a job. We invest in our people and encourage you to choose the best path for your career. It’s truly about you, your future, and where you want to go.

        We are looking for a Senior Data Engineer to help us continue to build the next evolution of our data platform in a scalable, performant, and customer-centric architecture.

        Our main tech stack includes Snowflake, Apache Airflow, AWS cloud infrastructure (e.g., Kinesis, Kubernetes/EKS, Lambda, Aurora RDS PostgreSQL), Python, and TypeScript.

        What you'll be doing

        We are typically organized into agile cross-functional teams composed of Engineering, Product, and Design, which allows us to develop deep expertise and rapidly deliver high-value features and functionality to our next-generation T3 Platform.

        You’ll be part of a close-knit team of data engineers developing and maintaining a data platform built with automation and self-service in mind, supporting analytics and machine learning data products for the next generation of our T3 Fleet that enable end users to track, monitor, and manage the health of their connected vehicles and deployed assets. 

        We'll be there to support you as you become familiar with our teams, product domains, tech stack and processes — generally how we all work together.

        Primary responsibilities for a Senior Data Engineer

        • Collaborate with Product Managers, Designers, Engineers, Data Scientists and Data Analysts to take ideas from concept to production at scale.
        • Design, build and maintain our data platform to enable automation and self-service for data scientists, machine learning engineers and analysts.
        • Design, build and maintain data product framework to support EquipmentShare application data science and analytics features.
        • Design, build and maintain CI/CD pipelines and automated data and machine learning deployment processes.
        • Develop data monitoring and alerting capabilities.
        • Document architecture, processes and procedures for knowledge sharing and cross-team collaboration.
        • Mentor peers to help them build their skills.

        Why We’re a Better Place to Work

        We can promise that every day will be a little different with new ideas, challenges and rewards.

        We’ve been growing as a team and we are not finished just yet— there is plenty of opportunity to shape how we deliver together.

        Our mission is to enable the construction industry with tools that unlock substantial increases in productivity. Together with our team and customers, we are building the future of construction.

        T3 is the only cloud-based operating system that brings together construction workflows & data from constantly moving elements in one place.

        • Competitive base salary and market leading equity package.
        • Unlimited PTO.
        • Remote first.
        • True work/life balance.
        • Medical, Dental, Vision and Life Insurance coverage.
        • 401(k) + match.
        • Opportunities for career and professional development with conferences, events, seminars and continued education.
        • On-site fitness center at the Home Office in Columbia, Missouri, complete with weightlifting machines, cardio equipment, group fitness space, racquetball courts, a climbing wall, and much more!
        • Volunteering and local charity support that help you nurture and grow the communities you call home through our Giving Back initiative.
        • Stocked breakroom and full kitchen with breakfast and lunch provided daily by our chef and kitchen crew.

        About You

        You're a hands-on developer who enjoys solving complex problems and building impactful solutions.  Most importantly, you care about making a difference.

        • Take the initiative to own outcomes from start to finish — knowing what needs to be accomplished within your domain and how we work together to deliver the best solution.
        • You are passionate about developing your craft — you understand what it takes to build quality, robust and scalable solutions.
        • You’ll see the learning opportunity when things don’t quite go to plan — not only for you but for how we continuously improve as a team.
        • You take a hypothesis-driven approach — knowing how to source, create and leverage data to inform decision making, using data to drive how we improve, to shape how we evaluate and make platform recommendations.

        So, what is important to us?

        Above all, you’ll get stuff done. More importantly, you’ll collaborate to do the right things in the right way to achieve the right outcomes.

        • 7+ years of relevant data platform development experience building production-grade solutions.
        • Proficient with SQL and a high-level object-oriented language (e.g., Python).
        • Experience with designing and building distributed data architecture.
        • Experience building and managing production-grade data pipelines using tools such as Airflow, dbt, DataHub, and MLFlow.
        • Experience building and managing production-grade data platforms using distributed systems such as Kafka, Spark, Flink and/or others.
        • Familiarity with event data streaming at scale.
        • Proven track record learning new technologies and applying that learning quickly.
        • Experience building observability and monitoring into data products. 
        • Motivated to identify opportunities for automation to reduce manual toil.

        EquipmentShare is committed to a diverse and inclusive workplace. EquipmentShare is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.

         

        #LI-Remote

         

        See more jobs at EquipmentShare

        Apply for this job

        +30d

        Data Engineer

        Kalderos | Remote, United States
        Bachelor's degree, slack, azure, c++

        Kalderos is hiring a Remote Data Engineer

        About Us

        At Kalderos, we are building unifying technologies that bring transparency, trust, and equity to the entire healthcare community with a focus on pharmaceutical pricing.  Our success is measured when we can empower all of healthcare to focus more on improving the health of people. 

        That success is driven by Kalderos’ greatest asset: our people. Our team thrives on the problems we solve, is driven to innovate, and grows through the feedback of peers. Our team is passionate about what they do, and we are looking for people to join our company and our mission.

        That’s where you come in! 

        What You’ll Do:

        For the position, we’re looking for a Data Engineer II to solve complex problems in the Drug Discounting space. Across all roles, we look for future team members who will live our values of Collaboration, Inspired, Steadfast, Courageous, and Excellence. 

        We’re a team of people striving for the best, so naturally, we want to hire the best! We know that job postings can be intimidating, and don’t expect any candidate to check off every box we have listed (though if you can, AWESOME!). We encourage you to apply if your experience matches about 70% of this posting.

        • Work with product teams to understand and develop data models that can meet requirements and operationalize well
        • Build out automated ETL jobs that reliably process large amounts of data, and ensure these jobs run consistently and well
        • Build tools that enable other data engineers to work more efficiently
        • Try out new data storage and processing technologies in proof of concepts and make recommendations to the broader team
        • Tune existing implementations to run more efficiently as they become bottlenecks, or migrate existing implementations to new paradigms as needed
        • Learn and apply knowledge about the drug discount space, and become a subject matter expert for internal teams to draw upon


        General Experience Guidelines

        We know your experience extends beyond what you have on paper. The following is a guideline of general experience we’re looking for in this role:

        • Bachelor’s degree in computer science or similar field
        • 4+ years work experience as a Data Engineer in a professional full-time role
        • Experience building ETL pipelines and other services for the healthcare industry 
        • Managing big data implementations – having worked on scaling data implementations, both vertically and horizontally, to manage millions of records. 

        Set Yourself Apart

        • Experience with dbt and Snowflake
        • Professional experience in application programming with an object oriented language. 
        • Experience with streaming technologies such as Kafka or Event Hubs 
        • Experience with orchestration frameworks like Azure Data Factory
        • Experience in the healthcare or pharmaceutical industries
        • Experience in a startup environment 

         

        Anticipated compensation range for this role is $110-150K/year USD plus bonus.

        ____________________________________________________________________________________________

        Highlighted Company Perks and Benefits

        • Continuous growth and development: Annual continuing education stipend supporting all employees on their ongoing knowledge journey.
        • Celebrating employee milestones: bi-annual stipend for all full-time employees to help you celebrate your exciting accomplishments throughout the year.
        • Work From Home Reimbursement: quarterly funds provided through the pandemic to help ensure all employees have what they need to be productive and effective while working from home.
        • A fair PTO system that allows for a healthy work-life balance. There’s no maximum, but there is a minimum; we want you to take breaks for yourself.
        • 401K plan with a company match.
        • Choose your own Technology: We’ll pay for your work computer. 


        What It’s Like Working Here

        • Competitive Salary, Bonus, and Equity Compensation. 
        • We thrive on collaboration, because we believe that all voices matter and we can only put our best work into the world when we work together to solve problems.
        • We empower each other: from our DEI Council to affinity groups for underrepresented populations, we believe in ensuring all voices are heard.
        • We know the importance of feedback in individual and organizational growth and development, which is why we've embedded it into our practice and culture. 
        • We’re curious and go deep. Our Slack channels are filled throughout the day with insightful articles and discussions about our industry and healthcare, and our book club is always bursting with questions.

        To learn more: https://www.kalderos.com/company/culture

        Kalderos is an equal opportunity workplace.  We are committed to equal opportunity regardless of race, color, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or veteran status.

         

        See more jobs at Kalderos

        Apply for this job

        +30d

        Senior Data Engineer

        Modern Health, Remote - US
        DevOps, Master’s Degree, Terraform, Scala, NoSQL, SQL, Design, MongoDB, Azure, Java, Docker, PostgreSQL, MySQL, Kubernetes, Python, AWS

        Modern Health is hiring a Remote Senior Data Engineer

        Modern Health 

        Modern Health is a mental health benefits platform for employers. We are the first global mental health solution to offer employees access to one-on-one, group, and self-serve digital resources for their emotional, professional, social, financial, and physical well-being needs—all within a single platform. Whether someone wants to proactively manage stress or treat depression, Modern Health guides people to the right care at the right time. We empower companies to help all their employees be the best version of themselves, and believe in meeting people wherever they are in their mental health journey.

        We are a female-founded company backed by investors like Kleiner Perkins, Founders Fund, John Doerr, Y Combinator, and Battery Ventures. We partner with 500+ global companies like Lyft, Electronic Arts, Pixar, Clif Bar, Okta, and Udemy that are taking a proactive approach to mental health care for their employees. Modern Health has raised more than $170 million in less than two years with a valuation of $1.17 billion, making Modern Health the fastest entirely female-founded company in the U.S. to reach unicorn status. 

        We tripled our headcount in 2021 and as a hyper-growth company with a fully remote workforce, we prioritize our people-first culture (winning awards including Fortune's Best Workplaces in the Bay Area 2021). To protect our culture and help our team stay connected, we require overlapping hours for everyone. While many roles may function from anywhere in the world—see individual job listing for more—team members who live outside the Pacific time zone must be comfortable working early in the morning or late at night; all full-time employees must work at least six hours between 8 am and 5 pm Pacific time each workday. 

        We are looking for driven, creative, and passionate individuals to join in our mission. An inclusive and diverse culture is a key component of mental well-being in the workplace, and that starts with how we build our own team. If you're excited about a role, we'd love to hear from you!

        The Role

        The Senior Data Engineer will play a critical role in designing, developing, and maintaining our data infrastructure. This role requires a deep understanding of data architecture, data modeling, and ETL processes. The ideal candidate will have extensive experience with big data technologies, cloud platforms, and a strong ability to collaborate with cross-functional teams to deliver high-quality data solutions.

        This position is not eligible to be performed in Hawaii.

        What You’ll Do

        • Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics
        • Architect and implement data storage solutions, including data warehouses, data lakes, and databases
        • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs
        • Optimize and tune data systems for performance, reliability, and scalability
        • Ensure data quality and integrity through rigorous testing, validation, and monitoring
        • Develop and enforce data governance policies and best practices
        • Stay current with emerging data technologies and industry trends, and evaluate their potential impact on our data infrastructure
        • Troubleshoot and resolve data-related issues in a timely manner

        Our Stack

        • AWS RDS
        • Snowflake
        • Fivetran
        • dbt
        • Prefect
        • Looker
        • Datadog

        Who You Are

        • Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field
        • 5+ years of experience in data engineering in a modern tech stack
        • Proficiency in programming languages such as Python, Java, or Scala
        • Strong experience with big data technologies
        • Expertise in SQL and experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra)
        • Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud
        • Familiarity with data warehousing solutions like Redshift, BigQuery, or Snowflake
        • Knowledge of data modeling, data architecture, and data governance principles
        • Experience with infrastructure-as-code (IaC) technologies (e.g., Terraform)
        • Excellent problem-solving skills and attention to detail
        • Strong communication and collaboration skills

        Bonus Points

        • Experience with containerization and orchestration tools like Docker and Kubernetes
        • Knowledge of machine learning and data science workflows
        • Experience with CI/CD pipelines and DevOps practices
        • Certification in cloud platforms (e.g., AWS Certified Data Analytics, Google Professional Data Engineer)

        Benefits

        Fundamentals:

        • Medical / Dental / Vision / Disability / Life Insurance 
        • High Deductible Health Plan with Health Savings Account (HSA) option
        • Flexible Spending Account (FSA)
        • Access to coaches and therapists through Modern Health's platform
        • Generous Time Off 
        • Company-wide Collective Pause Days 

        Family Support:

        • Parental Leave Policy 
        • Family Forming Benefit through Carrot
        • Family Assistance Benefit through UrbanSitter

        Professional Development:

        • Professional Development Stipend

        Financial Wellness:

        • 401k
        • Financial Planning Benefit through Origin

        But wait there’s more…! 

        • Annual Wellness Stipend to use on items that promote your overall well being 
        • New Hire Stipend to help cover work-from-home setup costs
        • ModSquad Community: Virtual events like active ERGs, holiday themed activities, team-building events and more
        • Monthly Cell Phone Reimbursement

        Equal Pay for Equal Work Act Information

        Please refer to the ranges below to find the starting annual pay range for individuals applying to work remotely from the following locations for this role.


        Compensation for the role will depend on a number of factors, including a candidate’s qualifications, skills, competencies, and experience and may fall outside of the range shown. Ranges are not necessarily indicative of the associated starting pay range in other locations. Full-time employees are also eligible for Modern Health's equity program and incredible benefits package. See our Careers page for more information.

        Depending on the scope of the role, some ranges are indicative of On Target Earnings (OTE) and include both base pay and commission at 100% achievement of established targets.

        San Francisco Bay Area: $138,500 - $162,900 USD
        All Other California Locations: $138,500 - $162,900 USD
        Colorado: $117,725 - $138,500 USD
        New York City: $138,500 - $162,900 USD
        All Other New York Locations: $124,650 - $146,600 USD
        Seattle: $138,500 - $162,900 USD
        All Other Washington Locations: $124,650 - $146,600 USD

        Below, we are asking you to complete identity information for the Equal Employment Opportunity Commission (EEOC). While we are required by law to ask these questions in the format provided by the EEOC, at Modern Health we know that gender is not binary, and we recognize that these categories do not reflect our employees' full range of identities.

        See more jobs at Modern Health

        Apply for this job

        +30d

        Data Engineer II

        Agile Six, United States, Remote
        ML, Agile, Design, API, Git, C++, Python, Backend

        Agile Six is hiring a Remote Data Engineer II

        Agile Six is a people-first, remote-work company that serves shoulder-to-shoulder with federal agencies to find innovative, human-centered solutions. We build better by putting people first. We are animated by our core values of Purpose, Wholeness, Trust, Self-Management and Inclusion. We deliver our solutions in autonomous teams of self-managed professionals (no managers here!) who genuinely care about each other and the work. We know that’s our company’s purpose – and that we can only achieve it by supporting a culture where people feel valued, self-managed, and love to come to work.

        The role

        Agile Six is looking for a Data Engineer for an anticipated role on our cross-functional agile teams. Our partners include the Department of Veterans Affairs (VA), the Centers for Medicare & Medicaid Services (CMS), the Centers for Disease Control and Prevention (CDC), and others. 

        The successful candidate will bring their experience in data formatting and integration engineering to help us expand a reporting platform. As part of the team, you will primarily be responsible for data cleaning and data management tasks, building data pipelines, and data modeling (designing the schema/structure of datasets and relationships between datasets). We are looking for someone who enjoys working on solutions to highly complex problems and someone who is patient enough to deal with the complexities of navigating the Civic Tech space. The successful candidate for this role is an excellent communicator, as well as someone who is curious about where data analysis, backend development, data engineering, and data science intersect.

        We embrace open source software and an open ethos regarding software development, and are looking for a candidate who does the same. Most importantly, we are looking for someone with a passion for working on important problems that have a lasting impact on millions of users and make a difference in our government!

        Please note, this position is anticipated, pending contract award response.

        Responsibilities

        • Contribute as a member of a cross-functional Agile team, using your expertise in data engineering, critical thinking, and collaboration to solve problems related to the project
          • Experience with Java/Kotlin/Python, command line, and Git is required
          • Experience with transfer and integration protocols including REST, SFTP, and SOAP is required
          • Experience with HL7 2.5.1 and FHIR is strongly preferred
        • Extract, transform, and load data. Pull together datasets, build data pipelines, and turn semi-structured and unstructured data into datasets that can be used for machine learning models.
        • Evaluate and recommend
        • We expect the responsibilities of this position to shift and grow organically over time, in response to considerations such as the unique strengths and interests of the selected candidate and other team members and an evolving understanding of the delivery environment.

        Basic qualifications

        • 2+ years of hands-on data engineering experience in a production environment
        • Experience with Java/Kotlin/Python, command line, and Git
        • Demonstrated experience with extract, transform, load (ETL) and data cleaning, data manipulation, and data management
        • Demonstrated experience building and orchestrating automated data pipelines in Java/Python
        • Experience with data modeling: defining the schema/structure of datasets and the relationships between datasets
        • Ability to create usable datasets from semi-structured and unstructured data
        • Solution-oriented mindset and proactive approach to solving complex problems
        • Ability to be autonomous, take initiative, and effectively communicate status and progress
        • Experience successfully collaborating with cross-functional partners and other designers and researchers, seeking and providing feedback in an Agile environment
        • Adaptive, empathetic, collaborative, and holds a positive mindset
        • Has lived and worked in the United States for 3 out of the last 5 years
        • Some of our clients may request or require travel from time to time. If this is a concern for you, we encourage you to apply and discuss it with us at your initial interview

        Additional desired qualifications

        • Familiarity with the Electronic Laboratory Reporting workflows and data flow
        • Knowledge of FHIR data / API standard, HL7 2.5.1
        • Experience building or maintaining web service APIs
        • Familiarity with various machine learning (ML) algorithms and their application to common ML problems (e.g. regression, classification, clustering)
        • Statistical experience or degree
        • Experience developing knowledge of complex domain and systems
        • Experience working with government agencies
        • Ability to work across multiple applications, components, languages, and frameworks
        • Experience working in a cross-functional team, including research, design, engineering, and product
        • You are a U.S. Veteran. As a service-disabled veteran-owned small business, we recognize the transition to civilian life can be tricky, and welcome and encourage Veterans to apply

        At Agile Six, we are committed to building teams that represent a variety of backgrounds, perspectives, and skills. Even if you don't meet every requirement, we encourage you to apply. We’re eager to meet people who believe in our mission and who can contribute to our team in a variety of ways.

        Salary and Sixer Benefits

        To promote equal pay for equal work, we publish salary ranges for each position.

        The salary range for this position is $119,931-$126,081

        Our benefits are designed to reinforce our core values of Wholeness, Self Management and Inclusion. The following benefits are available to all employees. We respect that only you know what balance means for your life and season. While we offer support from coaches, we expect you to own your wholeness, show up for work whole, and go home to your family the same. You will be seen, heard and valued. We expect you to offer the same for your colleagues, be kind (not controlling), be caring (not directive) and ready to participate in a state of flow. We mean it when we say “We build better by putting people first”.

        All Sixers Enjoy:

        • Self-managed work/life balance and flexibility
        • Competitive and equitable salary (equal pay for equal work)
        • Employee Stock Ownership (ESOP) for all employees!
        • 401K matching
        • Medical, dental, and vision insurance
        • Employer paid short and long term disability insurance
        • Employer paid life insurance
        • Self-managed and generous paid time off
        • Paid federal holidays and Election day off
        • Paid parental leave
        • Self-managed professional development spending
        • Self-managed wellness days

        Hiring practices

        Agile Six Applications, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, national origin, ancestry, sex, sexual orientation, gender identity or expression, religion, age, pregnancy, disability, work-related injury, covered veteran status, political ideology, marital status, or any other factor that the law protects from employment discrimination.

        Note: We participate in E-Verify. Upon hire, we will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S. Unfortunately, we are unable to sponsor visas at this time.

        If you need assistance or reasonable accommodation in applying for any of these positions, please reach out to careers@agile6.com. We want to ensure you have the ability to apply for any position at Agile Six.

        Please read and respond to the application questions carefully. Interviews are conducted on a rolling basis until the position has been filled.

         

        Apply for this job