ML Remote Jobs

285 Results

+30d

Lead Decision Scientist, Machine Learning Engineer, Colombia 2024

Aimpoint Digital - Medellin, CO - Remote
ML, DevOps, agile, terraform, SQL, Design, azure, scrum, git, kubernetes, python, AWS

Aimpoint Digital is hiring a Remote Lead Decision Scientist, Machine Learning Engineer, Colombia 2024

Aimpoint Digital is a premier analytics consulting firm with a mission to drive business value for clients through expertise in data strategy, data analytics, decision sciences, and data engineering and infrastructure. This position is within our decision sciences practice which focuses on delivering solutions via machine learning and statistical modelling.

What you will do

As a part of Aimpoint Digital, you will focus on enabling clients to get the most out of their data. You will work with all levels of the client organization to build value-driving solutions that extract insights, and then train them on how to manage and maintain these solutions. Typical solutions will utilize machine learning, artificial intelligence, statistical analysis, automation, optimization, and/or data visualizations. As a Lead Data Scientist with a Databricks focus, you will be expected to work independently on client engagements, take part in the development of our practice, aid in business development, and contribute innovative ideas and initiatives to our company. As a Lead Data Scientist you will:

  • Become a trusted advisor working with clients to design end-to-end analytical solutions
  • Work independently to solve complex data science use-cases across various industries
  • Design and develop feature engineering pipelines, build ML & AI infrastructure, deploy models, and orchestrate advanced analytical insights
  • Write code in SQL, Python, and Spark following software engineering best practices
  • Collaborate with stakeholders and customers to ensure successful project delivery

Who we are looking for

We are looking for collaborative individuals who want to drive value, work in a fast-paced environment, and solve real business problems. You are a coder who writes efficient and optimized code leveraging key Databricks features. You are a problem-solver who can deliver simple, elegant solutions as well as cutting-edge solutions that, regardless of complexity, your clients can understand, implement, and maintain. You genuinely think about the end-to-end machine learning pipeline as you generate robust solutions. You are both a teacher and a student as we enable our clients, upskill our teammates, and learn from one another. You want to drive impact for your clients and do so through thoughtfulness, prioritization, and seeing a solution through from brainstorming to deployment. In particular you have these traits:

  • Databricks experience is required.
  • Degree in Computer Science, Engineering, Mathematics, or equivalent experience.
  • Experience with building high quality Data Science models using Databricks ML to solve clients' business problems
  • Experience in deploying models via model serving within Databricks (see the sketch after this list)
  • Experience with managing stakeholders and collaborating with customers
  • Strong written and verbal communication skills required
  • Ability to manage an individual workstream independently
  • 3+ years of experience developing ML models in any platform (Azure, AWS, GCP, Databricks etc.)
  • Ability to apply data science methodologies and principles to real life projects
  • Expertise in software engineering concepts and best practices
  • Self-starter with excellent communication skills, able to work independently, and lead projects, initiatives, and/or people
  • Willingness to travel.
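
For illustration only (this sketch is not part of the job description): deploying a model via Databricks model serving typically starts with logging and registering it through MLflow. The dataset, model choice, and registry name below are hypothetical, and registration assumes a workspace with a model registry available.

```python
# Hedged sketch: train a model, log it with MLflow, and register it so a
# Databricks model-serving endpoint could reference it by name.
# All names and data here are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run() as run:
    model = RandomForestRegressor(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    mlflow.log_metric("r2", model.score(X_test, y_test))
    # Log the fitted model as an MLflow artifact.
    mlflow.sklearn.log_model(model, artifact_path="model")

# Register the logged model under a hypothetical name that a serving endpoint
# can then point at; this assumes a tracking server with a model registry.
mlflow.register_model(f"runs:/{run.info.run_id}/model", "demo_regressor")
```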

Preferred Qualifications

  • Consulting Experience
  • Databricks Machine Learning Associate or Machine Learning Professional Certification.
  • Familiarity with traditional machine learning tools such as Python, SKLearn, XGBoost, SparkML, etc.
  • Experience with deep learning frameworks like TensorFlow or PyTorch.
  • Knowledge of ML model deployment options (e.g., Azure Functions, FastAPI, Kubernetes) for real-time and batch processing.
  • Experience with CI/CD pipelines (e.g., DevOps pipelines, GitHub Actions).
  • Knowledge of infrastructure as code (e.g., Terraform, ARM Template, Databricks Asset Bundles).
  • Understanding of advanced machine learning techniques, including graph-based processing, computer vision, natural language processing, and simulation modeling.
  • Experience with generative AI and LLMs, such as LlamaIndex and LangChain
  • Understanding of MLOps or LLMOps.
  • Familiarity with Agile methodologies, preferably Scrum

We are actively seeking candidates for full-time, remote work within the US, the UK, or Colombia.

See more jobs at Aimpoint Digital

Apply for this job

+30d

Azure Administrator

Tiger Analytics - United States, Remote
ML, agile, azure, git, kubernetes, python

Tiger Analytics is hiring a Remote Azure Administrator

Tiger Analytics is an advanced analytics consulting firm. We are the trusted analytics partner for several Fortune 100 companies, enabling them to generate business value from data. Our consultants bring deep expertise in Data Science, Machine Learning, and AI. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner.

The Tiger Analytics team is looking for a strategic-minded technology leader with a strong track record of identifying gaps, defining roadmaps, and seeing solutions through from ideation to fruition. This role will provide technical solutions to multiple engineering and operations teams, develop platform architecture blueprints, guide the development of junior engineers into individual-contributor leaders, and bridge the gaps between platform teams and the application and central architecture teams. The ideal candidate will have demonstrated experience in software engineering, architecture, and large-scale delivery in a fast-paced, agile environment.

Requirements

The Azure Administrator will be responsible for providing technical expertise in deploying applications on Azure platform.

He/she must be self-motivated and apply knowledge of Azure to drive solutions, support the development team, and create documentation to support and describe technical solutions.

1. 2+ years of hands-on experience using Terraform automation templates (or a manual approach) to deploy Azure PaaS services, with specialization in PaaS resource provisioning in any of the following Azure PaaS services:

  • a. Azure Databricks
  • b. Azure ML Ops
  • c. Azure Storage, ADF, Key Vault, Log Analytics
  • d. Azure Kubernetes
  • e. Application Service Environment, Web and Function Apps

2. 1-2 years of hands-on experience with Azure networking (creation of private and service endpoints, NSG rules, routing, firewalls, internal and external web applications, Azure VNets, subnets, Azure network settings, CIDR address blocks, DNS settings, security policies)

3. 1-2 years of hands-on experience with Azure DevOps

4. 1-2 years of hands-on experience with scripting (PowerShell, Python, Azure CLI); see the sketch after this list

5. Experience with web security; understanding of the fundamentals of firewalls and ports

6. Experience with infrastructure monitoring and logging tools

7. Experience configuring and monitoring Azure PaaS services, Git, GitHub

8. Ability to pick up ITSM tools like ServiceNow
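
For illustration only (not part of the job requirements): the scripted provisioning mentioned in item 4 could also be done from Python with the Azure SDK rather than the CLI. The subscription ID and resource group name below are placeholders.

```python
# Hedged sketch: create an Azure resource group with the Azure SDK for Python,
# the kind of scripted provisioning the posting alludes to. Values are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, subscription_id)

# Create (or update) a resource group that later PaaS deployments can target.
rg = client.resource_groups.create_or_update(
    "rg-ml-platform-dev",  # hypothetical name
    {"location": "eastus"},
)
print(rg.name, rg.location)
```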


This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job

+30d

Engineering Manager, Data Engineering

Grammarly - Germany; Hybrid
ML, remote-first, SQL, Design, python

Grammarly is hiring a Remote Engineering Manager, Data Engineering

Grammarly is excited to offer a remote-first hybrid working model. Grammarly team members in this role must be based in Germany, and, depending on business needs, they must meet in person for collaboration weeks, traveling if necessary to the hub(s) where their team is based.


This flexible approach gives team members the best of both worlds: plenty of focus time along with in-person collaboration that fosters trust and unlocks creativity.

About Grammarly

Grammarly is the world’s leading AI writing assistance company trusted by over 30 million people and 70,000 teams. From instantly creating a first draft to perfecting every message, Grammarly helps people at 96% of the Fortune 500 and teams at companies like Atlassian, Databricks, and Zoom get their point across—and get results—with best-in-class security practices that keep data private and protected. Founded in 2009, Grammarly is No. 14 on the Forbes Cloud 100, one of TIME’s 100 Most Influential Companies, one of Fast Company’s Most Innovative Companies in AI, and one of Inc.’s Best Workplaces.

The opportunity 

To achieve our ambitious objectives, we are seeking an Engineering Manager to lead, scale, mentor, and drive our Data Engineering Team. This individual will be responsible for guiding a skilled team of software engineers who are focused on constructing analytical data models, pipelines, and innovative tools. They will collaborate with various teams across the company.

This person will be an integral part of the larger data organization, reporting directly to the Director of Data Engineering based in the US, and they’ll have the opportunity to influence decisions and the direction of our overall data platform, including infrastructure and analytics engineering.

Grammarly’s engineers and researchers have the freedom to innovate and uncover breakthroughs—and, in turn, influence our product roadmap. The complexity of our technical challenges is growing rapidly as we scale our interfaces, algorithms, and infrastructure. You can hear more from our team on our technical blog.

As the Manager of the Data Engineering team, you will co-own the company-wide data lake and set the vision for data schemas, analytical tooling, and pipelines. Our cutting-edge data lake is the central hub for all data producers and consumers, and your impact will span all of Grammarly.

Data engineers make disparate data centrally available to Grammarly’s team members while ensuring query efficiency and efficacy. Your team will support all of Grammarly’s functions to help us successfully operate three lines of business: individual consumers, enterprise, and Grammarly’s developer platform. Communication, stakeholder management, and a passion for data will be the backbone of Grammarly’s Data Engineering Manager’s success.

In this role, you will:

  • Build a highly specialized engineering team to support the growing needs and complexity of our product and business organizations. 
  • Oversee the maintenance and enhancement of our data lake, ensuring it meets the needs of all data producers and consumers.
  • Drive the implementation of robust data pipelines to ensure data is accurate, accessible, and up-to-date (see the sketch after this list).
  • Foster a collaborative and high-performance culture within the team.
  • Set technical vision for the team and ensure the scalability, low cost, low latency, and versatility of our internal analytics platform.
  • Provide hands-on guidance in coding, design, and debugging to ensure high-quality technical solutions.
  • Build strong relationships with all of Grammarly’s functions to generalize data needs and guide the roadmaps of our internal platforms.
  • Work closely with partner teams, including Data Science, ML, and Analytics Engineering, to align priorities and deliver integrated solutions.
  • Stay up-to-date with industry trends and best practices in data engineering, fostering a culture of continuous improvement and innovation.
  • Cultivate an ownership mindset and culture on your team and across product teams: provide the necessary metrics to help us understand what is working, what is not, and how to fix it.
  • Set high performance and quality standards and coach team members to meet them; mentor and grow junior and senior IC talent.
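
As a rough illustration of the pipeline work described above (not Grammarly's actual code or stack), a small PySpark aggregation job might look like the following; the paths and column names are hypothetical.

```python
# Hedged sketch: read raw events, aggregate daily active users, and write a
# model-ready table with PySpark. Paths and columns are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_active_users").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/events/")  # hypothetical path
daily_active = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date")
    .agg(F.countDistinct("user_id").alias("daily_active_users"))
)
daily_active.write.mode("overwrite").parquet("s3://example-bucket/analytics/dau/")
```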

Qualifications

  • Has 7+ years of experience in data engineering, with at least 3 years in a leadership or managerial role.
  • Fluent in data engineering technologies and languages like SQL, Python, Spark, and data pipeline frameworks.
  • Demonstrates ability to lead and mentor a team. Strong project management skills and a track record of delivering complex projects.
  • Has excellent analytical and problem-solving skills. Ability to dive deep into technical issues and provide effective solutions.
  • Is able to guide the team to build optimal data models and efficiently utilize our data infrastructure.
  • Has strong communication and interpersonal skills. Experience working in a collaborative, cross-functional environment, including with remote teams.
  • Has the ability and desire to operate in a fast-paced, dynamic environment where things change quickly.
  • Leads by setting well-understood goals and sharing the appropriate level of context for maximum autonomy but is also deeply technical and can dive in to help when necessary.
  • Embodies our EAGER values—is ethical, adaptable, gritty, empathetic, and remarkable.
  • Is inspired by our MOVE principles: move fast and learn faster; obsess about creating customer value; value impact over activity; and embrace healthy disagreement rooted in trust.
  • Is able to meet in person for their team’s scheduled collaboration weeks, traveling if necessary to the hub where their team is based.

Support for you, professionally and personally

  • Professional growth: We believe that autonomy and trust are key to empowering our team members to do their best, most innovative work in a way that aligns with their interests, talents, and well-being. We also support professional development and advancement with training, coaching, and regular feedback.
  • A connected team: Grammarly builds a product that helps people connect, and we apply this mindset to our own team. Our remote-first hybrid model enables a highly collaborative culture supported by our EAGER (ethical, adaptable, gritty, empathetic, and remarkable) values. We work to foster belonging among team members in a variety of ways. This includes our employee resource groups, Grammarly Circles, which promote connection among those with shared identities including BIPOC and LGBTQIA+ team members, women, and parents. We also celebrate our colleagues and accomplishments with global, local, and team-specific programs. 
  • Comprehensive benefits for candidates based in Germany: Grammarly offers all team members competitive pay along with a benefits package encompassing life care (including mental health care and risk benefits) and ample and defined time off. We also offer support to set up a home office, wellness and pet care stipends, learning and development opportunities, and more.

We encourage you to apply

At Grammarly, we value our differences, and we encourage all to apply. Grammarly is an equal-opportunity company. We do not discriminate on the basis of race or ethnic origin, religion or belief, gender, disability, sexual identity, or age.

For more details about the personal data Grammarly collects during the recruitment process, for what purposes, and how you can address your rights, please see the Grammarly Data Privacy Notice for Candidates here

#LI-AD3

#LI-Hybrid

 

Apply for this job

+30d

Software Engineer, Principal

Progress - Hybrid Remote, Hyderabad, India
ML, Design, linux

Progress is hiring a Remote Software Engineer, Principal

We are Progress (Nasdaq: PRGS) - an experienced, trusted provider of products designed with customers in mind so they can develop the applications they need, deploy where and how they want, and manage it all safely and securely.   
We’re proud to have a diverse, global team where we value the individual and enrich our culture by considering varied perspectives, because we believe people power progress. Join us as a Software Engineer, Principal 2 and help us do what we do best: propelling business forward.
 
In this role you will:
  • Security Policy Implementation: Implement and enforce security policy requirements, conduct risk assessments, and conduct vulnerability testing. 
  • Security Engineering & Assurance: Conduct design and architecture reviews, threat modeling, secure code reviews, and cryptographic reviews to ensure robust security measures. Support the open source ecosystem, engage in platform security engineering, and augment internal security engineering efforts. The role also includes research and development activities, particularly in compilers and binary translation research, to advance security technologies and practices.
  • Technical Expertise: Provide technical direction to engineering teams on various security areas, including network security, platform security, authentication/authorization systems, application security, and security frameworks. 
  • Engineering Initiatives: Take leadership of security engineering initiatives for production and corporate infrastructure. 
  • Subject Matter Expertise: Serve as an information security engineering subject matter expert; analyze attacks on customer applications from internal and external sources, proposing mitigations and fixes.
  • Incident Management: Manage security vulnerability resolution according to company policies. This requires immediate response and working with affected teams to investigate and mitigate/remediate the vulnerabilities. Clear communication skills are critical. 
  • Real-Time Response: Ensure timely and effective responses to security incidents. This involves coordinating with incident response teams, analyzing threat data, and implementing mitigation measures. 
  • Continuous Improvement: Stay informed about emerging threats and lead changes to security processes accordingly. Regularly assess and propose changes that improve the effectiveness of security operations.
  • Collaboration: Work closely with other internal and customer security professionals, including network engineers, system administrators, and threat analysts.  
Your background:
  • Bachelor’s or equivalent industry experience in Software Engineering, Information Security, or related fields.  
  • Business Application security patterns
  • Choosing and applying Cryptography for confidentiality, integrity, and availability
  • Software Security engineering best practices
  • Authentication, authorization, and network security protocols
  • Linux OS system security features and best practices
  • Windows OS system security features and best practices 
  • Knowledge of secure software development practices across distributed, container, and private/public cloud computing environments  
  • Familiarity with network security devices, and security software product solutions.  
  • Knowledge of machine learning practices for creating the standards against which ML (and AI) projects using Large Language Models & RAG can be reviewed, and for creating tools and techniques that help researchers assure the safety and security of those systems.
  • 7+ years of experience with security operations systems (e.g., IDS, SIEM, anti-virus log collection systems).  
  • Certifications: Industry certifications like CISSP, CISA, CEH, or GSEC are desirable.
If this sounds like you and fits your experience and career goals, we’d be happy to chat. What we offer in return is the opportunity to experience a great company culture with wonderful colleagues to learn from and collaborate with, and to enjoy:
 
Here at Progress, we truly care about your employee experience. It is important to us for our employees to balance their work and home life, obtain viable options for their health and wellness, grow their career, and plan for financial success.
  • 30 days of earned leave, plus an extra day off for your birthday, and various other leaves such as marriage leave, casual leave, maternity leave, and paternity leave
  • Premium group medical insurance for the employee and 5 dependents, personal accident insurance coverage, and life insurance coverage
  • Professional development reimbursement
  • Interest subsidy on loans - either vehicle or personal loans
Apply now!  

Together, We Make Progress

Progress is an inclusive workplace where opportunities to succeed are available to everyone. As a multicultural company serving a global community, we encourage a wide range of points of view and celebrate our diverse backgrounds. Our unique combination of perspectives inspires innovation, connects us to our customers and positively affects our communities. It is only by working together and learning from each other that we make Progress. Join us!

See more jobs at Progress

Apply for this job

+30d

Senior Data Scientist - Focus On Purchasing Department (27390)

Bosch Group - Joinville, Brazil, Remote
ML, Rust, Design, azure, kubernetes, jenkins, python

Bosch Group is hiring a Remote Senior Data Scientist - Focus On Purchasing Department (27390)

Job description

Main responsibilities:

  • To promote innovation with Artificial Intelligence and Machine Learning (AI/ML), actively participating in business ideations.
  • To assist in the design, architecture, and implementation of data streaming pipelines, with a focus on the Purchasing Department context at a global level.
  • To support, ideate and lead the implementation of solutions.
  • To develop solutions, tools and components to effectively integrate machine learning and artificial intelligence models, systems and workflows.
  • To develop and implement micro-services aimed at data streaming.
  • To actively participate in the delivery of high-quality data products and models, ensuring they are well documented, patterned and understandable.
  • To work in a multifunctional environment with national and international connections.
  • To implement the latest technologies, including AI- and ML-based ones, in the areas of data analysis, decision making, project execution, and project tracking.

Qualifications

  • Higher education in Computer Science, Engineering, Mathematics, Statistics, or related areas.
  • Solid knowledge and experience in Artificial Intelligence and Machine Learning.
  • Solid knowledge in the Modeling Life Cycle.
  • Knowledge in streaming & storage architectures for large volumes of data (e.g. Messaging, Kafka, Hadoop HDFS);
  • Knowledge in software engineering / development, edge & cloud computing, serverless and microservices architecture;
  • Experience with innovative tools and approaches such as Azure ML, SageMaker, and Vertex AI, as well as auxiliary tools like Kubernetes, Jenkins, MLflow, Spark, and GitHub Actions, among others.
  • Proficiency in a programming language, Python being mandatory.
  • Experience in developing and implementing APIs.
  • Skill in software development, understanding architectural patterns and programming paradigms.
  • Experience in preparing analyses and scenarios based on data extracted from systems, in order to define a strategy and chart the route to follow to achieve previously agreed objectives.
  • Advanced / Fluent English.

What makes you stand out

  • Familiarity with the Azure AI/ML platform.
  • Experience in Rust programming language.
  • Experience in Data Driven culture.
  • Ability to think from the customer's perspective and visualize the big picture.

See more jobs at Bosch Group

Apply for this job

+30d

Senior Engineering Manager, Reporting

Gusto - Denver, CO; San Francisco, CA; New York, NY; Chicago, IL; Los Angeles, CA; Miami, FL; Toronto, Ontario, CAN - Remote
ML, tableau, SQL, salesforce, Design, AWS

Gusto is hiring a Remote Senior Engineering Manager, Reporting

 


About Gusto

Gusto is a modern, online people platform that helps small businesses take care of their teams. On top of full-service payroll, Gusto offers health insurance, 401(k)s, expert HR, and team management tools. Today, Gusto offices in Denver, San Francisco, and New York serve more than 300,000 businesses nationwide.

Our mission is to create a world where work empowers a better life, and it starts right here at Gusto. That’s why we’re committed to building a collaborative and inclusive workplace, both physically and virtually. Learn more about our Total Rewards philosophy.

About the Role:

Gusto is seeking a Senior Engineering Leader to drive and execute the cross-functional strategy for our User-Facing Reporting and Analytics team. In this role, you will be responsible for managing user reporting, analytics, and providing users with the data they need in the format they require across the platform.

About the Team:

This team plays a critical role in ensuring the ease of use, efficiency, and quality of the data and reports rendered to meet customer needs. Their primary focus is on designing, implementing, and maintaining a robust reporting platform.

Here’s what you’ll do day-to-day:

  • Lead Gusto’s Reporting engineering team, including the recruitment, hiring, and empowerment of a world-class team
  • In this position, you will be responsible for building a scalable user-facing reports and dashboards platform while maintaining our data infrastructure, ensuring data quality, integrity, and accuracy
  • You will work cross-functionally with engineering, product, design, and stakeholder teams to lead various initiatives to advance our data and reporting solutions
  • You will leverage our data to identify key insights and create operational efficiencies, as well as produce accurate and meaningful analysis to drive business decisions
  • Build, develop, and maintain data models, automated reports, and dashboards using BI/Dashboarding tools such as PowerBI, ServiceNow, Tableau, and Salesforce
  • Meet with stakeholders from business and engineering to understand OKR requirements and convert them into analytical solutions
  • Analyze datasets to discover meaningful patterns, trends, and relationships to improve operations and productivity across our organization
  • Refine, enhance, and automate processes and reports by managing tables/views and data pipelines
  • Responsible for driving the development of the product and core platform in a fast-paced, start-up environment
  • Providing technical leadership in defining the product and platform solutions
  • Understand customer pain points, devise solutions, and then iterate through prototyping and frequent, successful product launches.
  • Work cross functionally with design leadership and product management leadership to build and drive Gusto’s access and workflow product vision and strategy
  • Empower a team of senior engineers, operating in an architect capacity, as well as engineering people managers, by creating alignment, clarity, and high morale, driving quick but informed decision-making, and helping them get unblocked

Here’s what we're looking for:

  • 10+ years of software engineering experience
  • 5+ years of managing highly distributed engineering teams, and at least 2 years leading other engineering managers 
  • 6+ years of experience working with analytical tools and languages such as SQL, PowerBI/Tableau, Suplari, Coupa Analytics, or Databricks
  • Experience with ELT data modeling and building analytical narratives using data visualizations
  • Experience with MLOps tooling such as KubeFlow, AWS Sagemaker, MLFlow, or other ML Ops tools
  • Experience in scaling engineering organizations with a focus on individual and team development
  • Ability to balance business needs, development for multiple product lines, and shipping high quality solutions
  • Experience in highly cross-functional environments for highly complex products preferred
  • Strong technical acumen with the ability to understand and debate tradeoffs and approaches to building scalable architecture and data models
  • A high standard, systems-first approach, and strong point of view for the user experience informed by customer empathy, research, data, and customer support insights
  • Strong product opinions, including the ability to tell a story about where we’re going that is clear and inspiring, and work with teams to map dependencies, risks, and tradeoffs to increase velocity and make the vision real
  • Sharp skills as a builder — even as you’ve become a strategic leader, you are fundamentally a builder who can jump into problem solving with team members when they need the support

Our cash compensation amount for this role is targeted at $191,000-$237,000 in Denver, $208,000-$258,000 in most remote locations, and $225,000-$279,000 for San Francisco & New York. Final offer amounts are determined by multiple factors including candidate experience and expertise and may vary from the amounts listed above.


Gusto has physical office spaces in Denver, San Francisco, and New York City. Employees who are based in those locations will be expected to work from the office on designated days, approximately 2-3 days per week (or more depending on role). The same office expectations apply to all Symmetry roles, Gusto's subsidiary, whose physical office is in Scottsdale.

Note: The San Francisco office expectations encompass both the San Francisco and San Jose metro areas. 

When approved to work from a location other than a Gusto office, a secure, reliable, and consistent internet connection is required.


Our customers come from all walks of life and so do we. We hire great people from a wide variety of backgrounds, not just because it's the right thing to do, but because it makes our company stronger. If you share our values and our enthusiasm for small businesses, you will find a home at Gusto. 

Gusto is proud to be an equal opportunity employer. We do not discriminate in hiring or any employment decision based on race, color, religion, national origin, age, sex (including pregnancy, childbirth, or related medical conditions), marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or other applicable legally protected characteristic. Gusto considers qualified applicants with criminal histories, consistent with applicable federal, state and local law. Gusto is also committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you require assistance in filling out a Gusto job application, please reach out to candidate-accommodations@gusto.com.

See more jobs at Gusto

Apply for this job

+30d

Senior Data Scientist

carwow - London, England, United Kingdom, Remote Hybrid
ML, agile, Design, git, python

carwow is hiring a Remote Senior Data Scientist

THE CARWOW GROUP

Carwow Group is driven by a passion for getting people into cars. But not just any car, the right car. That’s why we are building the go-to destination for car-changing, designed to reach drivers everywhere with our trail-blazing portfolio of personality-rich automotive brands: Carwow, Auto Express, evo, Driving Electric, and Car Buyer.

What started as a simple reviews site is now one of the largest online car-changing destinations in Europe - over 10m customers have used Carwow to help them buy and sell cars since its inception. Last year we grew over 50%, with nearly £3bn worth of cars bought on site, while £1.8bn of cars were listed for sale through our Sell My Car service.

In 2024 we went big and acquired Autovia, doubling our audience overnight. Together we now have one of the biggest YouTube channels in the world with over 1.1 billion annual views, sell 1.2 million print copies of our magazines, and have an annual web content reach of over 350 million.

WHY JOIN US?

We are winners of the prestigious Culture 100 award that recognises the most loved and happiest tech companies to work for! We have just raised $52m in funding led by global venture capital firm Bessemer Venture Partners (an early backer of LinkedIn and Shopify) to accelerate our growth plans!

As pioneers, we’re always driving for new territory and positive change, so our work as a group is never done. Where others see difficulty, it’s our responsibility to see possibility – building new experiences, launching new titles and listening to drivers.

Being a part of Carwow Group means championing drivers and the automotive industry, acting as a disrupter and never being afraid to fail (but learning fast when we do!).

Our team of 500 employees across the UK, Germany, Spain and Portugal are revolutionising car-changing and we are fast expanding our mission across every single brand and country we operate in, so jump in! 

THE ROLE

We are seeking an experienced and highly motivated Senior Data Scientist to join the Analytics & Data Science team. As a core member of our rapidly growing machine learning function within the business, you will be responsible for developing and implementing novel machine learning solutions that enhance and accelerate our key business operations. You will collaborate closely with stakeholders from Commercial, Operations, and Marketplace teams to understand their requirements, develop domain knowledge and a deep understanding of our data, then create robust machine learning models to drive key business objectives. This is an exciting opportunity to apply your expertise in machine learning to a diverse range of business applications in a successful and rapidly growing tech scale-up. 

WHAT YOU’LL DO

  • Collaborate with cross-functional teams including Commercial, Operations, and Marketplace to understand their specific business challenges
  • Design and develop machine learning solutions for stakeholders that provide efficient and reliable solutions to business operations needs
  • Focus on solutions that drive key business performance by identifying core drivers of success and developing models focused on enhancing them
  • Effectively communicate the function and purpose of ML solutions to stakeholders with varying degrees of technical complexity
  • Collaborate within the analytics team to understand how to best complement and enhance analytics projects and thereby enable wider team success
  • Proactively identify opportunities for ML solutions in the broader business

WHAT YOU’LL NEED

Please note: We know that no candidate will be the perfect match for all we've listed in this posting, so we’d encourage you to apply if you feel you're close to the brief but not an exact match. Ideally you’ll have:

  • Extensive Machine Learning Experience: Demonstrated track record of building machine learning (ML) models in Python that translate data into actionable insights.
  • Passion for Problem Solving: A relentless curiosity for understanding the factors that drive behaviours and trends in data. An analytical and investigative mindset that converges on efficient ways to extract insight from data.
  • Technical Expertise: solid experience developing ML solutions in a cloud environment (e.g., Vertex AI, Sagemaker); understanding of software engineering principles including version control (Git), code reviews, agile methodology, unit tests; and (desirable) familiarity with containerisation.
  • Stakeholder Management: Proven ability to work closely with stakeholders, actively listening to their needs, and translating those needs into effective ML solutions. Strong communication skills to present insights clearly and concisely.
  • Quantitative decision making: Ability to evaluate model performance with well-motivated statistical tests and success metrics, and consideration for this process in model development and experiment design (see the sketch after this list).
  • Python proficiency: Fluency with core data handling packages in Python (Pandas, NumPy, SciPy) and machine learning modules (TensorFlow or PyTorch, scikit-learn).
  • (Desirable) MLOps Experience: Familiarity with the ML production life cycle, including model training, monitoring, versioning, and model experimentation (champion vs. challenger).
  • (Nice to have) LLM Experience: development of LLM-powered solutions that add business value, comparative model evaluation, expertise with prompt engineering and LLM concepts (chain-of-thought, RAG, custom agents) 
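
Purely as an illustration of the quantitative evaluation mentioned above (not carwow's code), two candidate models can be compared on the same cross-validation folds with a paired statistical test; the dataset, models, and metric below are arbitrary choices.

```python
# Hedged sketch: paired comparison of two models via cross-validation and a
# paired t-test on per-fold scores. Dataset and models are illustrative.
from scipy import stats
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_breast_cancer(return_X_y=True)
cv = KFold(n_splits=10, shuffle=True, random_state=0)

# Score both candidates on identical folds so the comparison is paired.
baseline = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=cv, scoring="roc_auc")
challenger = cross_val_score(GradientBoostingClassifier(random_state=0), X, y, cv=cv, scoring="roc_auc")

# Paired t-test on per-fold AUC differences (simple and well-motivated, though
# corrections for overlapping training folds may be warranted in practice).
t_stat, p_value = stats.ttest_rel(challenger, baseline)
print(f"baseline={baseline.mean():.3f}, challenger={challenger.mean():.3f}, p={p_value:.3f}")
```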

INTERVIEW PROCESS 

  • Step 1: Hiring Manager Interview
  • Step 2:  Values and Experience
  • Step 3: Technical Task with Presentation

WHAT’S IN IT FOR YOU

  • Hybrid working, with two days a week in the London office 
  • Competitive salary to fund that dream holiday to Bali
  • Matched pension contributions for a peaceful retirement
  • Share options - when we thrive, so do you!
  • Vitality Private Healthcare, for peace of mind, plus eyecare vouchers
  • Life Assurance for (even more) peace of mind
  • Monthly coaching sessions with Spill - our mental wellbeing partner
  • Enhanced holiday package, plus Bank Holidays 
    • 28 days annual leave
    • 1 day for your wedding
    • 1 day off when you move house - because moving is hard enough without work!
    • For your third year anniversary, get 30 days of annual leave per year
    • For your tenth year anniversary, get 35 days of annual leave per year 
    • Option to buy 3 extra days of holiday per year  
  • Work from abroad for a month
  • Inclusive parental, partner and shared parental leave, fertility treatment and pregnancy loss policies
  • Bubble childcare support and discounted nanny fees for little ones
  • The latest tech (Macbook or Surface) to power your gif-sending talents
  • Up to £500/€550 home office allowance for that massage chair you’ve been talking about
  • Generous learning and development budget to help you master your craft
  • Regular social events: tech lunches, coffee with the exec sessions, lunch & learns, book clubs, social events/anything else you pester us for
  • Refer a friend, get paid. Repeat for infinite money

Diversity and inclusion is an integral part of our culture. We know that diverse teams are strong teams, so we welcome those with alternative identities, backgrounds, and experiences to apply for this position. We make recruiting decisions based on experience, skills and potential, so all our applicants are treated fairly and equally. 

See more jobs at carwow

Apply for this job

+30d

Senior IA/ML Engineer (Eng/Esp)

Plain Concepts - Spain, Remote
ML, agile, Design

Plain Concepts is hiring a Remote Senior IA/ML Engineer (Eng/Esp)

We are expanding our development teams and although we don’t care much about titles, we call this role Senior Data Scientist.

As a Data Scientist Engineer, you will be part of our international AI/ML team developing tailored solutions to satisfy our clients' needs. We are looking for a passionate engineer with a background in machine/deep learning. You will be training and deploying models, putting research into production, among other tasks.

You will be part of a multidisciplinary team, taking care of the software engineering challenges associated with AI. You will take part in challenging projects using cutting edge technologies. You will work in an international environment with the possibility of working from home or from our offices.

Our vision is to build multidisciplinary teams which directly manage projects in an AGILE way to find and implement the best solutions.

You will be responsible for:

  • Participating in the design and development of AI solutions for challenging projects.
  • Building production level ML/AI solutions, with solid software engineering and ML/AI principles.
  • MLOps Automated deployment and monitoring (models and infrastructure).
  • Data analysis (data cleaning, variable transformation, etc.).
  • Developing and training ML models.
  • Putting AI models into production.
  • This means parallelizing, optimizing, tuning, testing the models to deploy in a production environment.

What are we looking for?

  • More than 5 years of experience in AI / Machine Learning / Computer Science.
  • Can build an “end-to-end software product” which has a machine learning component.
  • Knowledge of applied computer vision.
  • Strong skills in Python and reasonable SQL understanding.
  • Experience in building ML/deep learning pipelines and models.
  • Experience in implementing production ready ML models using current ML/Deep learning techniques.
  • MLOps experience is very valuable for the team
  • Experience in projects with NLP is necessary
  • ENGLISH IS MANDATORY

Very nice to have:

  • Experience with deep learning frameworks like TensorFlow, PyTorch, etc.
  • Experience with Big Data projects and tools.
  • Experience writing RESTful web services.
  • Experience with DevOps technology (Docker, Continuous Integration and Continuous Deployment, etc.)
  • Experience with source code version control tools.
  • Experience working with Azure or other cloud-based solution.
  • Azure Machine Learning: Experience with Azure Machine Learning is a plus, although not essential.
  • Deep learning and neural networks.
  • Expertise in deep learning and high-level neural networks.
  • Experience with one or more of the following: Natural Language Processing, Computer Vision, recommendation systems, unsupervised learning, ranking systems or similar.
  • TensorFlow Lite, TensorFlow Serving, and gRPC.
  • Unit testing knowledge.
  • Solid object-oriented programming skills.

What do we offer?

  • Salary determined by the market and your experience
  • Flexible schedule: 35 hours/week (1-year trial period)
  • Fully remote work (optional)
  • Flexible compensation (restaurant, transport, and childcare)
  • Medical and dental insurance (completely free of charge for the employee)
  • Individual budget for training and free Microsoft certifications
  • English lessons (1 hour/week)
  • Birthday day off
  • Monthly bonus for electricity and Internet expenses at home
  • Discount on gym plan and sports activities
  • Plain Camp (annual team-building event)
  • The pleasure of always working with the latest technological tools!

With all this information you already know a lot about us. Will you let us know you better?

The selection process? Simple, just 3 steps: a call and 2 interviews with the team.

And you may wonder… Who is Plain Concepts?

Plain Concepts is made up of 400 people who are passionate about technology, driven by the change towards finding the best solutions for our customers and projects.

Throughout the years, the company has grown thanks to the great technical potential we have and relying on our craziest and most innovative ideas. We currently have over 14 offices in 6 different countries. Our main goal is to keep growing as a team, developing the best and most advanced projects in the market.

We truly believe in the importance of bringing together people from different backgrounds and countries to build the best team, with a diverse and inclusive culture.


What do we do at Plain Concepts?

We are characterized by a 100% technical DNA. We develop customized projects from scratch, provide technical consultancy, and deliver training.

  • We don’t do bodyshopping or outsourcing
  • Our teams are multidisciplinary, and the organizational structure is flat and horizontal
  • We are very committed to AGILE values
  • Sharing is caring: We help, support, and encourage each other to expand our knowledge internally and also towards the community (with conferences, events, talks…)
  • We always look for creativity and innovation, even when the idea might seem crazy to others
  • Transparency is key to any relationship

To know more about us, take a look in our website:

https://www.plainconcepts.com/case-studies/

At Plain Concepts, we certainly seek to provide equal opportunities. We want diverse applicants regardless of race, colour, gender, religion, national origin, citizenship, disability, age, sexual orientation, or any other characteristic protected by law.

See more jobs at Plain Concepts

Apply for this job

+30d

Senior Software Engineer, Component Verification

Torc Robotics - Remote, US
ML, Bachelor's degree, SQL, git, c++, linux, python

Torc Robotics is hiring a Remote Senior Software Engineer, Component Verification

About the Company

At Torc, we have always believed that autonomous vehicle technology will transform how we travel, move freight, and do business.

A leader in autonomous driving since 2007, Torc has spent over a decade commercializing our solutions with experienced partners. Now a part of the Daimler family, we are focused solely on developing software for automated trucks to transform how the world moves freight.

Join us and catapult your career with the company that helped pioneer autonomous technology, and the first AV software company with the vision to partner directly with a truck manufacturer.

Meet the team:   

In the Apps & Frameworks group, our core obligation is to develop and own the production-grade automated processes of the Torc data loop. Starting with data selection and following through training, evaluation, release verification, and deployment, our machine learning models in the end make our trucks perceive and steer through the world! The internal verification team focuses on recompute verification for all component teams, helping us understand the performance of our Virtual Driver automatically. To be successful, we collaborate with different component, data science, ML Ops, and recompute teams across the organization.

What you’ll do:   

  • Performance metric development for perception and behavior components in Python 
  • Managing, extending, and using SQL databases with component output data (see the sketch after this list)
  • Maintaining and extending a Python dashboard displaying component verification data 
  • Leveraging available (meta-)data of the Torc datalake to enrich our dashboard with helpful information for our component teams 
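
As a flavor of this work only (not Torc's code): a verification metric might be computed by pulling component outputs from a SQL database into Python. The database, table, and column names below are hypothetical.

```python
# Hedged sketch: load labeled perception outputs from a SQL database and compute
# precision/recall as a simple verification metric. All names are placeholders.
import sqlite3
import pandas as pd

conn = sqlite3.connect("component_outputs.db")  # placeholder database
df = pd.read_sql_query(
    "SELECT frame_id, detected, ground_truth FROM perception_outputs", conn
)

# Per-frame detection precision/recall against labeled ground truth.
tp = ((df["detected"] == 1) & (df["ground_truth"] == 1)).sum()
fp = ((df["detected"] == 1) & (df["ground_truth"] == 0)).sum()
fn = ((df["detected"] == 0) & (df["ground_truth"] == 1)).sum()
precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
print(f"precision={precision:.3f}, recall={recall:.3f}")
```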

 

What you’ll need to succeed:   

  • Considered very skilled and proficient in discipline; conducts important work under minimal supervision and with latitude for independent judgment 
  • BS+ 4+ years of experience or MS+ 3+ years of experience. 
  • Proficient in Git and Linux 
  • Advanced Python programming skills 
  • Proficiency in SQL for database management and data analysis 
  • Domain experience in automated vehicle technology related to requirements, testing and verification 
     

Bonus Points!   

  • Experience in modern python repository and package management 
  • Experience with data visualization and analysis related to robotics/automated vehicles 

Perks of Being a Full-time Torc’r  

Torc cares about our team members and we strive to provide benefits and resources to support their health, work/life balance, and future. Our culture is collaborative, energetic, and team focused. Torc offers:      

  • A competitive compensation package that includes a bonus component and stock options    
  • 100% paid medical, dental, and vision premiums for full-time employees      
  • 401K plan with a 6% employer match    
  • Flexibility in schedule and generous paid vacation (available immediately after start date)   
  • Company-wide holiday office closures    
  • AD+D and Life Insurance 

Hiring Range for Job Opening 
US Pay Range
$160,800 - $193,000 USD

At Torc, we’re committed to building a diverse and inclusive workplace. We celebrate the uniqueness of our Torc’rs and do not discriminate based on race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, veteran status, or disabilities.

Even if you don’t meet 100% of the qualifications listed for this opportunity, we encourage you to apply. 

See more jobs at Torc Robotics

Apply for this job

+30d

Senior Consultant | AI Solutioning (Remote)

Trace3 - Remote
ML, Sales, OpenAI, Design, azure, c++, python, AWS

Trace3 is hiring a Remote Senior Consultant | AI Solutioning (Remote)


Who is Trace3?

Trace3 is a leading Transformative IT Authority, providing unique technology solutions and consulting services to our clients. Equipped with elite engineering and dynamic innovation, we empower IT executives and their organizations to achieve competitive advantage through a process of Integrate, Automate, Innovate.

Our culture at Trace3 embodies the spirit of a startup with the advantage of a scalable business. Employees can grow their career and have fun while doing it!

Trace3 is headquartered in Irvine, California. We employ more than 1,200 people all over the United States. Our major field office locations include Denver, Indianapolis, Grand Rapids, Lexington, Los Angeles, Louisville, Texas, San Francisco.  

Ready to discover the possibilities that live in technology?

 

Come Join Us!

Street-Smart - Thriving in Dynamic Times

We are flexible and resilient in a fast-changing environment. We continuously innovate and drive constructive change while keeping a focus on the “big picture.” We exercise sound business judgment in making high-quality decisions in a timely and cost-effective manner. We are highly creative and can dig deep within ourselves to find positive solutions to different problems.

Juice - The “Stuff” it takes to be a Needle Mover

We get things done and drive results. We lead without a title, empowering others through a can-do attitude. We look forward to the goal, mentally mapping out every checkpoint on the pathway to success, and visualizing what the final destination looks and feels like.

Teamwork - Humble, Hungry and Smart

We are humble individuals who understand how our job impacts the company's mission. We treat others with respect, admit mistakes, give credit where it’s due and demonstrate transparency. We “bring the weather” by exhibiting positive leadership and solution-focused thinking. We hug people in their trials, struggles, and failures – not just their success. We appreciate the individuality of the people around us.


 

About the Role:

The Sr. Consultant of AI Solutioning will be instrumental in delivering exceptional AI solutions to clients. Reporting to the Director of AI Solutioning, this role involves active participation in client delivery and support for presales activities. Leveraging deep expertise in AI solution design, development, and deployment, the Sr. Consultant will serve as a subject matter expert and thought leader in the AI solutioning domain. In collaboration with the Director of AI Solutioning and other capability leads, responsibilities include developing and refining methodologies and frameworks, contributing to business development and proposal activities, and ensuring high-quality delivery across multiple clients. The primary focus will be leading delivery teams and advancing the overall growth and development of AI solutioning capabilities.

What You’ll Do:

I.  AI Capabilities Development

  • Support the development and refinement of methodologies for AI requirement gathering, solution design, and development
  • Create reusable AI solutions capabilities for common industry challenges
  • Develop AI solutions to address key client challenges, including AI monitoring and observability, and security operations for machine learning models (ML LLM/Sec Ops)
  • Contribute to the development of Trace3’s AI Solutioning product and service portfolio, aligning with market demands and company strategy

II.  Teamwork

  • Coordinate approach with the Director of AI Solutioning to ensure AI solutioning aligns with Trace3 business objectives and growth strategies
  • Collaborate with cross-functional teams to align AI solutioning with overall product and services strategy
  • Support sales/account teams by providing expertise in AI solution design and development during sales pursuits and client engagement

III.  Talent Development

  • Mentor and lead a team of AI architects, strategists and product managers, ensuring successful project delivery and continuous development
  • Contribute to the continuous learning and development of internal talent

IV.  AI Capabilities Sales and Delivery

  • Work closely with clients to understand their challenges, gather and analyze data, formulate hypotheses, and define actionable plans for solutions.
  • Ensure the quality and consistency of AI solution delivery across multiple clients
  • Guide clients through technology/data selection, leveraging expertise in Azure OpenAI, Snowflake, Databricks, LLMs and related AI tools
  • Lead client engagements, from initial solution design through to solution deployment
  • Lead resources directly and through cross functional teams; exhibit strong leadership to drive high performing teams
  • Manage the scope, requirements, resources and budget for all assigned projects
  • Continuously monitor and create transparency through the communication of project status, issues and risks to internal teams and our client

V.  Thought Leadership

  • Stay updated on latest AI trends, technologies, and best practices to inform strategy development and implementation.

 

Qualifications & Interests:

  • Bachelor’s degree required, advanced degree/certifications preferred.
  • 5+ years’ experience leading technical consulting engagements including focuses on data science, data transformation, AI/data architectures, advanced analytics, or similar platform implementation
  • 2+ years’ experience in solution design and development, with a strong track record of successful client engagements
  • Relevant AI/ML certifications preferred but not required (e.g., Azure Machine Learning Specialty, Google Cloud Professional Machine Learning Engineer, etc.)
  • Certifications in data platforms, data science, AI, statistical analysis and/or related areas by reputable organizations
  • Familiarity with various AI and Machine Learning technologies, methods, tools (e.g., GitHub, PyTorch, TensorFlow, etc.), cloud platforms (e.g., AWS, Azure, GCP), programming languages (Python, R, etc.), and data architecture
  • Extensive knowledge of data analysis, statistical modeling, consultative frameworks, project management methodologies, and technical implementation
  • Up-to-date understanding of AI industry trends and emerging technologies
  • Highly organized, detail-oriented, excellent time management skills and able to effectively prioritize tasks in a fast-paced, high-volume, and evolving work environment
  • Ability to approach customer requests with a proactive and consultative manner; listen and understand user requests and needs and effectively deliver
  • Strong influencing skills to get things done and inspire business transformation
  • Excellent oral and written communication and presentation skills, with an ability to present security-related concepts to C-Level Executives and non-technical audiences
  • Conflict negotiation and problem-solving skills and agility
Actual salary will be based on a variety of factors, including location, experience, skill set, performance, licensure and certification, and business needs. The range for this position in other geographic locations may differ. Certain positions may also be eligible for variable incentive compensation, such as bonuses or commissions, that is not included in the base salary.
Estimated Pay Range
$99,400 - $149,200 USD

The Perks:

  • Comprehensive medical, dental and vision plans for you and your dependents
  • 401(k) Retirement Plan with Employer Match, 529 College Savings Plan, Health Savings Account, Life Insurance, and Long-Term Disability
  • Competitive Compensation
  • Training and development programs
  • Stocked kitchen with snacks and beverages
  • Collaborative and cool culture
  • Work-life balance and generous paid time off

 

***To all recruitment agencies: Trace3 does not accept unsolicited agency resumes/CVs. Please do not forward resumes/CVs to our careers email addresses, Trace3 employees or any other company location. Trace3 is not responsible for any fees related to unsolicited resumes/CVs.

See more jobs at Trace3

Apply for this job

+30d

Senior AI Infra Engineer, AI/ML and Data Infrastructure

Chan Zuckerberg Initiative - Redwood City, CA (Open to Remote)
ML, Rust, scala, airflow, Design, azure, ruby, java, c++, kubernetes, linux, python, AWS, PHP

Chan Zuckerberg Initiative is hiring a Remote Senior AI Infra Engineer, AI/ML and Data Infrastructure

The Chan Zuckerberg Initiative was founded by Priscilla Chan and Mark Zuckerberg in 2015 to help solve some of society’s toughest challenges — from eradicating disease and improving education to addressing the needs of our local communities. Our mission is to build a more inclusive, just, and healthy future for everyone.

The Team

Across our work in Science, Education, and within our communities, we pair technology with grantmaking, impact investing, and collaboration to help accelerate the pace of progress toward our mission. Our Central Operations & Partners team provides the support needed to push this work forward. 

Central Operations & Partners consists of our Brand & Communications, Community, Facilities, Finance, Infrastructure/IT Operations/Business Systems, Initiative Operations, People, Real Estate/Workplace/Facilities/Security, Research & Learning, and Ventures teams. These teams provide the essential operations, services, and strategies needed to support CZI’s progress toward achieving its mission to build a better future for everyone.

The Opportunity

By pairing engineers with leaders in our science and education teams, we can bring AI/ML technology to the table in new ways to help drive solutions. We are uniquely positioned to design, build, and scale software systems to help educators, scientists, and policy experts better address the myriad challenges they face. Our technology team is already helping bring personalized learning tools to teachers and schools across the country. We are also supporting scientists around the world as they develop a comprehensive reference atlas of all cells in the human body, and we are developing the capacity to apply state-of-the-art methods in artificial intelligence and machine learning to solve important problems in the biomedical sciences.

The AI/ML and Data Engineering Infrastructure organization works on building shared tools and platforms to be used across all of the Chan Zuckerberg Initiative, partnering and supporting the work of a wide range of Research Scientists, Data Scientists, AI Research Scientists, as well as a broad range of Engineers focusing on Education and Science domain problems. Members of the shared infrastructure engineering team have an impact on all of CZI's initiatives by enabling the technology solutions used by other engineering teams at CZI to scale. A person in this role will build these technology solutions and help to cultivate a culture of shared best practices and knowledge around core engineering.

We are building a world-class shared services model, and being based in New York helps us achieve our service goals. We require all interested candidates to be based out of New York City and available to work onsite 2-3 days a week.

What You'll Do

  • Participate in the technical design and building of efficient, stable, performant, scalable, and secure AI/ML and data infrastructure engineering solutions.
  • Do active, hands-on coding on our Deep Learning and Machine Learning models.
  • Design and implement complex systems that integrate with our large-scale AI/ML GPU compute infrastructure and platform, making it easier and more convenient for our Research Engineers, ML Engineers, and Data Scientists to work across multiple clouds.
  • Use your solid experience in building containerized applications and infrastructure with Kubernetes to support our large-scale GPU research cluster and our various heterogeneous, distributed AI/ML environments (a minimal sketch of this kind of workload follows this list).
  • Collaborate with other team members on the design and build of our cloud-based AI/ML platform solutions, which include Databricks Spark, Weaviate vector databases, and our hosted cloud GPU compute services running containerized PyTorch on large-scale Kubernetes.
  • Collaborate with our partners on data management solutions for our heterogeneous collection of complex datasets.
  • Help build tooling that makes optimal use of our shared infrastructure, empowering our AI/ML efforts with a world-class GPU compute cluster and other compute environments such as our AWS-based services.
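
Purely as an illustrative sketch of the kind of containerized GPU workload referenced in the Kubernetes bullet above (not this team's actual code), the snippet below uses the Kubernetes Python client to submit a single-GPU PyTorch training job; the namespace, image name, and command are hypothetical placeholders.

# Minimal sketch: submit a one-off GPU training job to a Kubernetes cluster.
# The namespace, image, and command are hypothetical placeholders.
from kubernetes import client, config

def submit_training_job(namespace: str = "ml-research") -> None:
    config.load_kube_config()  # authenticate using the local kubeconfig
    container = client.V1Container(
        name="pytorch-train",
        image="registry.example.com/pytorch-train:latest",  # hypothetical image
        command=["python", "train.py", "--epochs", "10"],
        resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
    )
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name="pytorch-train-demo"),
        spec=client.V1JobSpec(
            backoff_limit=2,
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(containers=[container], restart_policy="Never")
            ),
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace=namespace, body=job)

if __name__ == "__main__":
    submit_training_job()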

What You'll Bring

  • BS or MS degree in Computer Science or a related technical discipline or equivalent experience
  • 5+ years of relevant coding experience
  • 3+ years of systems Architecture and Design experience, with a broad range of experience across Data, AI/ML, Core Infrastructure, and Security Engineering
  • Experience scaling containerized applications on Kubernetes or Mesos, including expertise in creating custom containers using secure AMIs and continuous deployment systems that integrate with Kubernetes or Mesos (Kubernetes preferred)
  • Proficiency with Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure, and experience with On-Prem and Colocation Service hosting environments
  • Proven coding ability with a systems language such as Rust, C/C++, C#, Go, Java, or Scala
  • Demonstrated ability with a scripting language such as Python, PHP, or Ruby
  • AI/ML platform operations experience in an environment with challenging data and systems problems, including large-scale Kafka and Spark deployments (or their counterparts such as Pulsar, Flink, and/or Ray), as well as workflow scheduling tools such as Apache Airflow, Dagster, or Apache Beam
  • MLOps experience working with medium to large scale GPU clusters in Kubernetes (Kubeflow),  HPC environments, or large scale Cloud based ML deployments
  • Working knowledge of Nvidia CUDA and AI/ML custom libraries.  
  • Knowledge of Linux systems optimization and administration
  • Understanding of Data Engineering, Data Governance, Data Infrastructure, and AI/ML execution platforms.
  • PyTorch, Keras, or TensorFlow experience is a strong nice-to-have
  • HPC and Slurm experience is a strong nice-to-have

Compensation

The Redwood City, CA base pay range for this role is $190,000 - $285,000. New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. Actual placement in range is based on job-related skills and experience, as evaluated throughout the interview process. Pay ranges outside Redwood City are adjusted based on cost of labor in each respective geographical market. Your recruiter can share more about the specific pay range for your location during the hiring process.

Benefits for the Whole You 

We’re thankful to have an incredible team behind our work. To honor their commitment, we offer a wide range of benefits to support the people who make all we do possible. 

  • CZI provides a generous employer match on employee 401(k) contributions to support planning for the future.
  • Annual benefit for employees that can be used most meaningfully for them and their families, such as housing, student loan repayment, childcare, commuter costs, or other life needs.
  • CZI Life of Service Gifts are awarded to employees to “live the mission” and support the causes closest to them.
  • Paid time off to volunteer at an organization of your choice. 
  • Funding for select family-forming benefits. 
  • Relocation support for employees who need assistance moving to the Bay Area
  • And more!

Commitment to Diversity

We believe that the strongest teams and best thinking are defined by the diversity of voices at the table. We are committed to fair treatment and equal access to opportunity for all CZI team members and to maintaining a workplace where everyone feels welcomed, respected, supported, and valued. Learn about our diversity, equity, and inclusion efforts. 

If you’re interested in a role but your previous experience doesn’t perfectly align with each qualification in the job description, we still encourage you to apply as you may be the perfect fit for this or another role.

Explore our work modes, benefits, and interview process at www.chanzuckerberg.com/careers.

#LI-Remote 

 

 

See more jobs at Chan Zuckerberg Initiative

Apply for this job

+30d

Staff Data & Machine Learning Engineer

Celonis - Remote, Germany
ML, SQL, Design, Docker, Python, Backend, Frontend

Celonis is hiring a Remote Staff Data & Machine Learning Engineer

We're Celonis, the global leader in Process Mining technology and one of the world's fastest-growing SaaS firms. We believe there is a massive opportunity to unlock productivity by placing data and intelligence at the core of business processes - and for that, we need you to join us.

The Team:

Our team is responsible for building Celonis’ end-to-end Task Mining solution. Task Mining is the technology that allows businesses to capture user interaction (desktop) data, so they can analyze how teams get work done, and how they can do it even better. We own all the related components, e.g. the desktop client, the related backend services, the data processing capabilities, and Studio frontend applications.

The Role:

Celonis is looking for a Staff Data & Machine Learning Engineer to improve and extend our existing Task Mining ETL pipeline and to build production-ready, AI-based features into the Task Mining product. You will own the solution that simplifies the extraction of insights from task mining data. This role demands a blend of expertise in data engineering, software development, and machine learning, utilizing Python.


The work you’ll do:

  • Design, build, and maintain robust, scalable data pipelines that facilitate the ingestion, processing, and transformation of large datasets (a rough sketch follows this list)
  • Drive the development of AI-powered features and applications from scratch within the Task Mining product
  • Implement data strategies and develop data models
  • Collaborate with other engineering teams to implement, deploy, and monitor ML models in production, ensuring their performance and accuracy
  • Leverage machine learning techniques to provide actionable insights and recommendations for process optimization
  • Write performant, scalable, and easy-to-understand SQL queries and optimize existing ones
  • Learn PQL (Process Query Language – Celonis’ own language for analytical formulas and expressions) and use it to query data from our process mining engine
  • Own the implementation of end-to-end solutions: leading the design, implementation, build, and delivery to customers
  • Provide technical leadership and mentorship to other engineers and team members
  • Lead design discussions, code reviews, and technical planning sessions to ensure high standards and knowledge sharing
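
As a rough sketch of the pipeline work referenced in the list above (the column names and sample events are hypothetical, not taken from this posting), a transformation that turns raw user-interaction events into per-case durations might look like this:

# Sketch: aggregate raw interaction events into per-case durations and step counts.
# The columns ("case_id", "timestamp", "application") are hypothetical.
import pandas as pd

events = pd.DataFrame(
    {
        "case_id": ["A", "A", "B", "B", "B"],
        "timestamp": pd.to_datetime(
            ["2024-01-01 09:00", "2024-01-01 09:12",
             "2024-01-01 10:00", "2024-01-01 10:05", "2024-01-01 10:30"]
        ),
        "application": ["SAP", "Excel", "SAP", "Outlook", "SAP"],
    }
)

durations = (
    events.sort_values("timestamp")
    .groupby("case_id")
    .agg(start=("timestamp", "min"), end=("timestamp", "max"), steps=("application", "count"))
)
durations["duration_minutes"] = (durations["end"] - durations["start"]).dt.total_seconds() / 60
print(durations)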


The qualifications you need:

  • 8+ years of practical experience in a Computer Science/Data Science related field
  • Or PhD in Data Science/AI/ML area with 5+ years of practical experience 
  • Experience building production-ready and scalable AI/ML applications in the Python ecosystem (a minimal model-serving sketch follows this list)
  • Ability to optimize data pipelines, applications, and machine learning models for high performance and scalability
  • Understanding of ETL jobs, data warehouses/lakes, data modeling, schema design
  • Excellent command of SQL, including query optimization principles
  • Ability to assess dependencies within complex systems, quickly transform your thoughts into an accessible prototype and efficiently explain it to diverse stakeholders
  • Experience with containerization and CI/CD pipelines (e.g. Docker, Github Actions)
  • Interest in learning new technologies (e.g. PQL language and Object Centric Process Mining)
  • Strong communication and collaboration skills (English is a must)
  • Able to supervise and coach mid-level and senior colleagues
  • Knowledge of Column-oriented DBMS (e.g. Vertica) and its specific features would be beneficial
  • Nice to have: knowledge of frameworks such as TensorFlow, PyTorch, LangChain, FastAPI, and SQLAlchemy
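
The following is a minimal, hypothetical sketch of what serving a trained model behind an HTTP endpoint can look like with FastAPI, one of the nice-to-have frameworks listed above; the model file name and feature layout are placeholders, not details from this posting.

# Minimal sketch: serve a pickled scikit-learn model behind an HTTP endpoint.
# "model.joblib" and the feature layout are hypothetical placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumed to be trained and saved elsewhere

class Features(BaseModel):
    values: list[float]  # one row of numeric features

@app.post("/predict")
def predict(features: Features) -> dict:
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run with, e.g.: uvicorn serve:app --reload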

What Celonis can offer you:

  • The unique opportunity to work with industry-leading process mining technology
  • Investment in your personal growth and skill development (clear career paths, internal mobility opportunities, L&D platform, mentorships, and more)
  • Great compensation and benefits packages (equity (restricted stock units), life insurance, time off, generous leave for new parents from day one, and more). For intern and working student benefits, click here.
  • Physical and mental well-being support (subsidized gym membership, access to counseling, virtual events on well-being topics, and more)
  • A global and growing team of Celonauts from diverse backgrounds to learn from and work with
  • An open-minded culture with innovative, autonomous teams
  • Business Resource Groups to help you feel connected, valued and seen (Black@Celonis, Women@Celonis, Parents@Celonis, Pride@Celonis, Resilience@Celonis, and more)
  • A clear set of company values that guide everything we do: Live for Customer Value, The Best Team Wins, We Own It, and Earth Is Our Future

About Us

Since 2011, Celonis has helped thousands of the world's largest and most valued companies deliver immediate cash impact, radically improve customer experience and reduce carbon emissions. Its Process Intelligence platform uses industry-leading process mining technology and AI to present companies with a living digital twin of their end-to-end processes. For the first time, everyone in an organisation has a common language about how the business works, visibility into where value is hidden and the ability to capture it. Celonis is headquartered in Munich (Germany) and New York (USA) and has more than 20 offices worldwide.

Get familiar with the Celonis Process Intelligence Platform by watching this video.

Join us as we make processes work for people, companies and the planet.

 

Celonis is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. Different makes us better.

Accessibility and Candidate Notices

See more jobs at Celonis

Apply for this job

+30d

Senior Manager/Associate Director - Analytics Consulting (Healthcare)

Tiger Analytics is hiring a Remote Senior Manager/Associate Director - Analytics Consulting (Healthcare)

Tiger Analytics is pioneering what AI and analytics can do to solve some of the toughest problems faced by organizations globally. We develop bespoke solutions powered by data and technology for several Fortune 100 companies. We have offices in multiple cities across the US, UK, India, and Singapore, and a substantial remote global workforce.

If you are passionate about working on business problems that can be solved using structured and unstructured data on a large scale, Tiger Analytics would like to talk to you. Now hiring for multiple opportunities in Technology Consulting and Solution Delivery.

Responsibilities

Requirement gathering

  • Drive discussions with business and internal stakeholders to understand client requirements
  • Co-own and lead the planning, development, documentation, and day-to-day management of Data Science and AI solutions and products, prioritizing the client’s objectives and business goals.
  • Responsible for managing the account and building client relationships to ensure account growth.
  • Identifying the right opportunities for Tiger and playing a key role in seeing them through to completion

Scoping & solutioning

  • Ability to collaborate with Technical SMEs such as Data Scientists, Data Engineers and ML Engineers from Tiger.
  • Based on client requirements, detail out the scope of work and devise high-level task breakdown and timelines

  • 10-15 years of professional work experience in data analytics and leading multiple projects and client stakeholders.
  • Experience in analytics consulting is a must.
  • At least 5-7 years of experience in the Healthcare/Health Insurance space.
  • Serve as the primary point of contact for the Company, managing the program to ensure integrated efforts in HEDIS quality improvement.
  • Identify and implement improvements to analytics workflows and processes to enhance efficiency and effectiveness in HEDIS reporting.
  • Ensure all analytical activities adhere to HEDIS guidelines, regulatory requirements, and industry standards.
  • Ability to engage with executive/VP-level stakeholders from the client’s team to translate business problems into high-level analytics solution approaches.
  • A solid understanding of statistical and machine-learning algorithms is a plus.
  • Strong SQL skills and hands-on experience with analytic tools like R and Python, and visualization tools like Tableau and Power BI.

Significant career development opportunities exist as the company grows. The position offers a unique opportunity to be part of a small, fast-growing, challenging, and entrepreneurial environment, with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job

+30d

Director | AI Solutioning (Remote)

Trace3 - Remote
ML, Sales, OpenAI, Ability to travel, Design, Azure, C++, Python, AWS

Trace3 is hiring a Remote Director | AI Solutioning (Remote)


Who is Trace3?

Trace3 is a leading Transformative IT Authority, providing unique technology solutions and consulting services to our clients. Equipped with elite engineering and dynamic innovation, we empower IT executives and their organizations to achieve competitive advantage through a process of Integrate, Automate, Innovate.

Our culture at Trace3 embodies the spirit of a startup with the advantage of a scalable business. Employees can grow their career and have fun while doing it!

Trace3 is headquartered in Irvine, California. We employ more than 1,200 people all over the United States. Our major field office locations include Denver, Indianapolis, Grand Rapids, Lexington, Los Angeles, Louisville, Texas, San Francisco.  

Ready to discover the possibilities that live in technology?

 

Come Join Us!

Street-Smart - Thriving in Dynamic Times

We are flexible and resilient in a fast-changing environment. We continuously innovate and drive constructive change while keeping a focus on the “big picture.” We exercise sound business judgment in making high-quality decisions in a timely and cost-effective manner. We are highly creative and can dig deep within ourselves to find positive solutions to different problems.

Juice - The “Stuff” it takes to be a Needle Mover

We get things done and drive results. We lead without a title, empowering others through a can-do attitude. We look forward to the goal, mentally mapping out every checkpoint on the pathway to success, and visualizing what the final destination looks and feels like.

Teamwork - Humble, Hungry and Smart

We are humble individuals who understand how our job impacts the company's mission. We treat others with respect, admit mistakes, give credit where it’s due and demonstrate transparency. We “bring the weather” by exhibiting positive leadership and solution-focused thinking. We hug people in their trials, struggles, and failures – not just their success. We appreciate the individuality of the people around us.


 

About the Role:

The Director of AI Solutioning will be an integral member of the AI Leadership Team, responsible for shaping and executing Trace3’s AI product strategy. This role will focus on defining and delivering innovative AI service offerings, while also providing strategic direction and technical expertise in AI solutions design and implementation. Key responsibilities include collaborating with clients’ AI leaders and executives to develop and execute AI strategies that address prioritized use cases. The Director will lead the definition and development of AI solutions, ensuring they align with both client needs and company objectives. This role demands a strong blend of consultative skills, technical proficiency, and business acumen to effectively navigate the rapidly evolving AI landscape and drive significant value for Trace3 and our clients.

What You’ll Do:

I.  AI Capabilities Development

  • Develop and maintain annual capabilities roadmap and resource plan
  • Create frameworks for translating business problems into AI-solvable use cases
  • Develop and refine methodologies for AI requirement gathering, solution design, and development
  • Oversee the creation of reusable AI solutions capabilities for common industry challenges
  • Develop solutions to address key AI challenges, including AI monitoring and observability, and security operations for machine learning models (ML LLM/Sec Ops)
  • Develop and maintain AI Solutioning product and service portfolio, aligning with market demands and company strategy

II. Teamwork

  • Collaborate with executives to ensure AI solutioning aligns with Trace3 business objectives and growth strategies
  • Coordinate with other Product Leaders to align AI solutioning with overall product strategy
  • Participate in cross-product strategic pursuit team to evaluate and pursue significant revenue generating opportunities
  • Support sales/account teams by providing expertise in AI solution design and development during sales pursuits and client engagements

III.  Talent Development

  • Support talent management efforts to attract, develop and retain highly competitive AI talent
  • Mentor and lead a team of AI strategists, architects and product managers, ensuring successful project delivery and continuous development

IV.  AI Capabilities Sales and Delivery

  • Partner with clients to align Trace3 capabilities and offerings to their needs, and develop tailored proposals for clients.
  • Drive sales and GTM activities around the AI Solutioning space, working with our sales leaders and account teams.
  • Act as project director and subject matter expert on strategic client projects and sales pursuits.
  • Guide clients through technology/data selection, leveraging expertise in Azure OpenAI, Snowflake, Databricks, LLMs and related AI tools

V.  Thought Leadership

  • Stay updated on latest AI trends, technologies, and best practices to inform strategy development and implementation.
  • Drive thought leadership initiatives through white papers, presentations, and industry events

 

Qualifications & Interests:

  • Master’s degree or Ph.D. with a focus on AI, Machine Learning, Statistics, Economics, or Business Administration strongly preferred
  • 10+ years’ experience including consulting leadership with focus areas including data/AI strategy, data science, advanced analytics, and data ecosystem transformations
  • 5+ years’ experience in solution design and development, with a strong track record of successful client engagements
  • 5+ years of product management experience in AI, Advanced Analytics, or digital transformation
  • (Preferred) Relevant AI/ML certifications (e.g., Azure Machine Learning Specialty, Google Cloud Professional Machine Learning Engineer, etc.)
  • Certifications in data science, AI, statistical analysis and/or related areas by reputable organizations
  • Extensive knowledge of data analysis, statistical modeling
  • Deep understanding of AI and Machine Learning technologies, methods, tools (e.g., GitHub, PyTorch, TensorFlow, etc.), cloud platforms (e.g., AWS, Azure, GCP), programming languages (Python, R, etc.), and data architecture
  • Up-to-date understanding of AI industry trends and emerging technologies
  • Proven track record in bringing revenue-generating AI solutions to market
  • Experience in a consulting or solutions-oriented environment
  • Demonstrated track record of thought leadership content creation
  • Strong financial and business acumen with understanding of a multi-faceted business operation
  • Strong influencing skills to get things done and inspire business transformation
  • Ability to approach customer and sales requests in a proactive and consultative manner; listen to and understand user requests and needs and deliver effectively
  • Excellent oral and written communication and presentation skills, with an ability to present security-related concepts to C-level executives and non-technical audiences
  • Conflict negotiation and problem-solving skills and agility
  • Ability to travel when needed

 

 

Actual salary will be based on a variety of factors, including location, experience, skill set, performance, licensure and certification, and business needs. The range for this position in other geographic locations may differ. Certain positions may also be eligible for variable incentive compensation, such as bonuses or commissions, that is not included in the base salary.
Estimated Pay Range
$212,200 - $255,200 USD

The Perks:

  • Comprehensive medical, dental and vision plans for you and your dependents
  • 401(k) Retirement Plan with Employer Match, 529 College Savings Plan, Health Savings Account, Life Insurance, and Long-Term Disability
  • Competitive Compensation
  • Training and development programs
  • Stocked kitchen with snacks and beverages
  • Collaborative and cool culture
  • Work-life balance and generous paid time off

 

***To all recruitment agencies: Trace3 does not accept unsolicited agency resumes/CVs. Please do not forward resumes/CVs to our careers email addresses, Trace3 employees or any other company location. Trace3 is not responsible for any fees related to unsolicited resumes/CVs.

See more jobs at Trace3

Apply for this job

+30d

VP/ Director, Data Science - Supply Chain

Blend36 - Columbia, MD, Remote
ML, Scala, SQL, Azure, Python, AWS

Blend36 is hiring a Remote VP/ Director, Data Science - Supply Chain

Job Description

At Blend360 we want to ensure that our clients have access to the data, insights, and innovations required to deliver against their Supply Chain Strategy. We are seeking a VP of Data Science who can help advance the field within the Supply Chain organization and deliver meaningful solutions that strive to solve our clients’ biggest challenges.

Accountabilities:

As a Supply Chain Data Scientist, you will build domain-specific knowledge regarding supply chain, working closely with stakeholders to understand key business problems and bringing Data Science solutions to resolve them. You will also advance the development of the Data Science capability within supply chain, including Artificial Intelligence (AI) and Machine Learning (ML). Your role will be pivotal in driving awareness of the value Data Science offers our clients with regard to the global supply chain.

Summary with focus on communication: Data Scientists at Blend360 work with business leaders to solve our clients’ business challenges. Here at Blend360 we work with clients in marketing, revenue management, customer service, inventory management, and many other aspects of modern business. Our Lead Data Scientists have the business acumen to apply Data Science to many different business models and situations.

We expect the Data Science Managers to be excellent communicators with the ability to describe complex concepts clearly and concisely. They should be able to work independently in gathering requirements, developing roadmaps, and delivering results.

Teamwork and Leadership: We work as a team and Data Science Managers lead both by mentoring or managing Data Scientists as well as leading by example.

Technical know-how: Our Data Scientists have a broad knowledge of a variety of data and mathematical solutions. Our work includes statistical analyses, predictive modeling, machine learning, and experimental design. We evaluate different sources of data, discover patterns hidden within raw data, create insightful variables, and develop competing models with different machine learning algorithms. We validate and cross-validate our recommendations to make sure our recommendations will perform well over time.

Conclusion: If you love to solve difficult problems and deliver results; if you like to learn new things and apply innovative, state-of-the-art methodology, join us at Blend360.

Responsibilities

  • Advance the development of the Data Science capability within supply chain.
  • Build domain specific knowledge regarding supply chain.
  • Ability to provide ethical and positive leadership that motivates direct reports and develops their talent and skillset while achieving results.
  • Directly manage analyst project work and overall performance, including effective career planning; have difficult conversations and deliver constructive feedback with support from senior management.
  • Interview, hire and train new employees.
  • Analyze team KPIs, develop solutions and alternative methods to achieve goals.
  • Build positive and productive relationships with clients for business growth.
  • Understand client needs and customize existing business processes to meet client needs.
  • Promptly address client concerns and professionally manage requests.
  • Work as a strategic partner with leadership teams to support client needs.
  • Work with practice leaders and clients to understand business problems, industry context, data sources, potential risks, and constraints
  • Problem-solve with practice leaders to translate the business problem into a workable Data Science solution; propose different approaches and their pros and cons
  • Work with practice leaders to get stakeholder feedback, get alignment on approaches, deliverables, and roadmaps
  • Develop a project plan including milestones, dates, owners, and risks and contingency plans
  • Create and maintain efficient data pipelines, often within clients’ architecture. Typically, data are from a wide variety of sources, internal and external, and manipulated using SQL, spark, and Cloud big data technologies
  • Assemble large, complex data sets from client and external sources that meet functional business requirements.
  • Build analytics tools to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Perform data cleaning/hygiene, data QC, and integrate data from both client internal and external data sources on Advanced Data Science Platform. Be able to summarize and describe data and data issues
  • Conduct statistical data analysis, including exploratory data analysis, data mining, and document key insights and findings toward decision making
  • Train, validate, and cross-validate predictive models and machine learning algorithms using state-of-the-art Data Science techniques and tools (a small illustration follows this list)
  • Document predictive models/machine learning results that can be incorporated into client-deliverable documentation
  • Assist client to deploy models and algorithms within their own architecture
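
As a small, generic illustration of the train/validate/cross-validate step mentioned in the list above (the synthetic data, feature names, and model choice are illustrative only, not part of this posting):

# Sketch: cross-validate a predictive model on synthetic demand-like data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))  # e.g., lead time, price, promo flag, seasonality index (hypothetical)
y = X @ np.array([2.0, -1.0, 0.5, 3.0]) + rng.normal(scale=0.5, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"5-fold R^2: mean={scores.mean():.3f}, std={scores.std():.3f}")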

Qualifications

  • MS degree in Statistics, Math, Data Analytics, or a related quantitative field
  • At least 5+ years Professional experience in Advanced Supply Chain Data Science 
  • Experience with one or more Advanced Data Science software languages (R, Python, Scala, SAS) 
  • Proven ability to deploy machine learning models from the research environment (Jupyter Notebooks) to production via procedural or pipeline approaches
  • Experience with SQL and relational databases, query authoring and tuning as well as working familiarity with a variety of databases including Hadoop/Hive
  • Experience with spark and data-frames in PySpark or Scala
  • Strong problem-solving skills; ability to pivot complex data to answer business questions. Proven ability to visualize data for influencing.
  • Comfortable with cloud-based platforms (AWS, Azure, Google)
  • Experience with Google Analytics, Adobe Analytics, Optimizely a plus

See more jobs at Blend36

Apply for this job

+30d

VP/Director Data Science -Supply Chain

Blend36 - Calgary, Canada, Remote
ML, Scala, SQL, Azure, Python, AWS

Blend36 is hiring a Remote VP/Director Data Science -Supply Chain

Job Description

At Blend360 we want to ensure that our clients have access to the data, insights, and innovations required to deliver against their Supply Chain Strategy. We are seeking a VP of Data Science who can help advance the field within the Supply Chain organization and deliver meaningful solutions that strive to solve our clients’ biggest challenges.

Accountabilities:

As a Supply Chain Data Scientist, you will build domain-specific knowledge regarding supply chain, working closely with stakeholders to understand key business problems and bringing Data Science solutions to resolve them. You will also advance the development of the Data Science capability within supply chain, including Artificial Intelligence (AI) and Machine Learning (ML). Your role will be pivotal in driving awareness of the value Data Science offers our clients with regard to the global supply chain.

Summary with focus on communication: Data Scientists at Blend360 work with business leaders to solve our clients’ business challenges. Here at Blend360 we work with clients in marketing, revenue management, customer service, inventory management, and many other aspects of modern business. Our Lead Data Scientists have the business acumen to apply Data Science to many different business models and situations.

We expect the Data Science Managers to be excellent communicators with the ability to describe complex concepts clearly and concisely. They should be able to work independently in gathering requirements, developing roadmaps, and delivering results.

Teamwork and Leadership: We work as a team and Data Science Managers lead both by mentoring or managing Data Scientists as well as leading by example.

Technical know-how: Our Data Scientists have a broad knowledge of a variety of data and mathematical solutions. Our work includes statistical analyses, predictive modeling, machine learning, and experimental design. We evaluate different sources of data, discover patterns hidden within raw data, create insightful variables, and develop competing models with different machine learning algorithms. We validate and cross-validate our recommendations to make sure our recommendations will perform well over time.

Conclusion: If you love to solve difficult problems and deliver results; if you like to learn new things and apply innovative, state-of-the-art methodology, join us at Blend360.

Responsibilities

  • Advance the development of the Data Science capability within supply chain.
  • Build domain specific knowledge regarding supply chain.
  • Ability to provide ethical and positive leadership that motivates direct reports and develops their talent and skillset while achieving results.
  • Directly manage analyst project work and overall performance, including effective career planning; have difficult conversations and deliver constructive feedback with support from senior management.
  • Interview, hire and train new employees.
  • Analyze team KPIs, develop solutions and alternative methods to achieve goals.
  • Build positive and productive relationships with clients for business growth.
  • Understand client needs and customize existing business processes to meet client needs.
  • Promptly address client concerns and professionally manage requests.
  • Work as a strategic partner with leadership teams to support client needs.
  • Work with practice leaders and clients to understand business problems, industry context, data sources, potential risks, and constraints
  • Problem-solve with practice leaders to translate the business problem into a workable Data Science solution; propose different approaches and their pros and cons
  • Work with practice leaders to get stakeholder feedback, get alignment on approaches, deliverables, and roadmaps
  • Develop a project plan including milestones, dates, owners, and risks and contingency plans
  • Create and maintain efficient data pipelines, often within clients’ architecture. Typically, data are from a wide variety of sources, internal and external, and manipulated using SQL, spark, and Cloud big data technologies
  • Assemble large, complex data sets from client and external sources that meet functional business requirements.
  • Build analytics tools to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Perform data cleaning/hygiene, data QC, and integrate data from both client internal and external data sources on Advanced Data Science Platform. Be able to summarize and describe data and data issues
  • Conduct statistical data analysis, including exploratory data analysis, data mining, and document key insights and findings toward decision making
  • Train, validate, and cross-validate predictive models and machine learning algorithms using state of the art Data Science techniques and tools
  • Document predictive models/machine learning results that can be incorporated into client-deliverable documentation
  • Assist client to deploy models and algorithms within their own architecture

Qualifications

  • MS degree in Statistics, Math, Data Analytics, or a related quantitative field
  • At least 5+ years Professional experience in Advanced Supply Chain Data Science 
  • Experience with one or more Advanced Data Science software languages (R, Python, Scala, SAS) 
  • Proven ability to deploy machine learning models from the research environment (Jupyter Notebooks) to production via procedural or pipeline approaches
  • Experience with SQL and relational databases, query authoring and tuning as well as working familiarity with a variety of databases including Hadoop/Hive
  • Experience with spark and data-frames in PySpark or Scala
  • Strong problem-solving skills; ability to pivot complex data to answer business questions. Proven ability to visualize data for influencing.
  • Comfortable with cloud-based platforms (AWS, Azure, Google)
  • Experience with Google Analytics, Adobe Analytics, Optimizely a plus

See more jobs at Blend36

Apply for this job

+30d

VP/Director, Data Science - Supply Chain

Blend36 - Toronto, Canada, Remote
ML, Scala, SQL, Azure, Python, AWS

Blend36 is hiring a Remote VP/Director, Data Science - Supply Chain

Job Description

At Blend360 we want to ensure that our clients have access to the data, insights, and innovations required to deliver against their Supply Chain Strategy. We are seeking a VP of Data Science who can help advance the field within the Supply Chain organization and deliver meaningful solutions that strive to solve our clients’ biggest challenges.

Accountabilities:

As a Supply Chain Data Scientist, you will build domain-specific knowledge regarding supply chain, working closely with stakeholders to understand key business problems and bringing Data Science solutions to resolve them. You will also advance the development of the Data Science capability within supply chain, including Artificial Intelligence (AI) and Machine Learning (ML). Your role will be pivotal in driving awareness of the value Data Science offers our clients with regard to the global supply chain.

Summary with focus on communication: Data Scientists at Blend360 work with business leaders to solve our clients’ business challenges. Here at Blend360 we work with clients in marketing, revenue management, customer service, inventory management, and many other aspects of modern business. Our Lead Data Scientists have the business acumen to apply Data Science to many different business models and situations.

We expect the Data Science Managers to be excellent communicators with the ability to describe complex concepts clearly and concisely. They should be able to work independently in gathering requirements, developing roadmaps, and delivering results.

Teamwork and Leadership: We work as a team and Data Science Managers lead both by mentoring or managing Data Scientists as well as leading by example.

Technical know-how: Our Data Scientists have a broad knowledge of a variety of data and mathematical solutions. Our work includes statistical analyses, predictive modeling, machine learning, and experimental design. We evaluate different sources of data, discover patterns hidden within raw data, create insightful variables, and develop competing models with different machine learning algorithms. We validate and cross-validate our recommendations to make sure our recommendations will perform well over time.

Conclusion: If you love to solve difficult problems and deliver results; if you like to learn new things and apply innovative, state-of-the-art methodology, join us at Blend360.

Responsibilities

  • Advance the development of the Data Science capability within supply chain.
  • Build domain specific knowledge regarding supply chain.
  • Ability to provide ethical and positive leadership that motivates direct reports and develops their talent and skillset while achieving results.
  • Directly manage analyst project work and overall performance, including effective career planning; have difficult conversations and deliver constructive feedback with support from senior management.
  • Interview, hire and train new employees.
  • Analyze team KPIs, develop solutions and alternative methods to achieve goals.
  • Build positive and productive relationships with clients for business growth.
  • Understand client needs and customize existing business processes to meet client needs.
  • Promptly address client concerns and professionally manage requests.
  • Work as a strategic partner with leadership teams to support client needs.
  • Work with practice leaders and clients to understand business problems, industry context, data sources, potential risks, and constraints
  • Problem-solve with practice leaders to translate the business problem into a workable Data Science solution; propose different approaches and their pros and cons
  • Work with practice leaders to get stakeholder feedback, get alignment on approaches, deliverables, and roadmaps
  • Develop a project plan including milestones, dates, owners, and risks and contingency plans
  • Create and maintain efficient data pipelines, often within clients’ architecture. Typically, data are from a wide variety of sources, internal and external, and manipulated using SQL, spark, and Cloud big data technologies
  • Assemble large, complex data sets from client and external sources that meet functional business requirements.
  • Build analytics tools to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Perform data cleaning/hygiene, data QC, and integrate data from both client internal and external data sources on Advanced Data Science Platform. Be able to summarize and describe data and data issues
  • Conduct statistical data analysis, including exploratory data analysis, data mining, and document key insights and findings toward decision making
  • Train, validate, and cross-validate predictive models and machine learning algorithms using state of the art Data Science techniques and tools
  • Document predictive models/machine learning results that can be incorporated into client-deliverable documentation
  • Assist client to deploy models and algorithms within their own architecture

Qualifications

  • MS degree in Statistics, Math, Data Analytics, or a related quantitative field
  • At least 5+ years Professional experience in Advanced Supply Chain Data Science 
  • Experience with one or more Advanced Data Science software languages (R, Python, Scala, SAS) 
  • Proven ability to deploy machine learning models from the research environment (Jupyter Notebooks) to production via procedural or pipeline approaches
  • Experience with SQL and relational databases, query authoring and tuning as well as working familiarity with a variety of databases including Hadoop/Hive
  • Experience with spark and data-frames in PySpark or Scala
  • Strong problem-solving skills; ability to pivot complex data to answer business questions. Proven ability to visualize data for influencing.
  • Comfortable with cloud-based platforms (AWS, Azure, Google)
  • Experience with Google Analytics, Adobe Analytics, Optimizely a plus

See more jobs at Blend36

Apply for this job

+30d

Senior Software Engineer EMEA

TetraScience - Ireland, Remote
ML, Agile, Design

TetraScience is hiring a Remote Senior Software Engineer EMEA

Who We Are 

TetraScience is the Scientific Data and AI Cloud company with a mission to radically improve and extend human life. TetraScience combines the world's only open, purpose-built, and collaborative scientific data and AI cloud with deep scientific expertise across the value chain to accelerate and improve scientific outcomes. TetraScience is catalyzing the Scientific AI revolution by designing and industrializing AI-native scientific data sets, which it brings to life in a growing suite of next generation lab data management products, scientific use cases, and AI-based outcomes. For more information, please visit tetrascience.com.

Our core values are designed to guide our behaviors, actions, and decisions such that we operate as one. We are looking to add individuals to our team that demonstrate the following values:

  • Transparency and Context- We execute on our ambitious mission by starting with radical data transparency and business context. We openly and proactively share all vital data and make it actionable, so our employees and stakeholders can solve any problem presented to them.
  • Trust and Collaboration- We are committed to always communicating openly and honestly at every level of the organization, functionally, cross-functionally, internally, and externally. Empowering our employees will drive positive change across our entire ecosystem.
  • Fearlessness and Resilience- We must be fearless and resilient to fulfill our potential. We proactively run toward challenges of all types, we unblinkingly acknowledge and confront the brutal facts - which all innovative growth companies invariably face – and we embrace uncertainty and take calculated risks.
  • Alignment with Customers- We know that our customers' success is our success. We are honored and humbled by their commitment to us, and we are completely committed to ensuring they achieve their mission to unlock the world’s most important scientific innovations.
  • Commitment to Craft- We take our craft seriously and seek to be best-in-class in all we do, regardless of our functional role, seniority, or tenure. We are members of one team that combines intellectual horsepower and curiosity, humility, and empathy to ensure we are always learning and evolving.
  • Equality of Opportunity- We cannot imagine our journey without a workforce which reflects humanity’s diversity. We seek out the best of the best who bring with them unique and invaluable perspectives and talents and embody our common values - regardless of gender, ethnicity, race, or age.

We are seeking a talented Senior Software Engineer to join our team. In this role, you will be responsible for developing and maintaining high-quality software solutions that enable scientists and researchers to leverage and analyze complex datasets. You will work closely with cross-functional teams to understand user requirements, design and implement scalable software architectures, and ensure the reliability and performance of our software products.

Who you are

You are dedicated to mastering your craft and driven by a passion for innovation. You are an inventor, committed to continually enhancing both your deliverables and the processes by which you deliver them. You take full ownership of all aspects of software development, ensuring the highest quality outcomes. You exhibit curiosity and a relentless pursuit of understanding and clarity. You view every interaction as a chance to learn and grow. Balancing humility with confidence, you recognize that you don't need to be the smartest person in the room to make a significant impact. Instead, you focus on elevating those around you, contributing to the collective success of your team.

What You Will Do

  • Join the TetraScience engineering team to develop a cutting-edge application for our customers.
  • Design and implement scalable foundational services to support data pipeline processing, search functionality, user management, and other customer-facing features.
  • Work on ML infrastructure and Generative AI applications to advance scientific use cases.
  • Build and deliver high-quality products using Agile software development methodologies.
  • Collaborate with the product management team to transform vision and ideas into tangible results.
  • Work with a geographically distributed team across various time zones.
  • Engage in continuous learning, growth, and professional development.
  • Articulate your vision to peers and leadership, while being open to constructive feedback and maintaining resilience.
  • Strong proficiency (7+ years) in Node.js, TypeScript, and Python.
  • Over 7 years of experience with Amazon Web Services (AWS).
  • Extensive experience with databases, including relational, NoSQL, and data platform infrastructure.
  • Experience with big data distributed systems, such as Databricks, Snowflake, AWS EMR, Glue, and other lakehouse technologies.
  • Proven track record with large-scale data distributed systems.
  • Excellent verbal, written, and presentation skills.
  • Bachelor’s degree in Computer Science or a related field.

    • Competitive Salary and equity in a fast-growing company.
    • Supportive, team-oriented culture of continuous improvement.
    • Generous paid time off (PTO).
    • Flexible working arrangements - Remote work.

#LIRemote

See more jobs at TetraScience

Apply for this job

+30d

Senior Software Engineer

TetraScience - Boston, Massachusetts, United States, Remote
ML, Agile, Design

TetraScience is hiring a Remote Senior Software Engineer

Who We Are 

TetraScience is the Scientific Data and AI Cloud company with a mission to radically improve and extend human life. TetraScience combines the world's only open, purpose-built, and collaborative scientific data and AI cloud with deep scientific expertise across the value chain to accelerate and improve scientific outcomes. TetraScience is catalyzing the Scientific AI revolution by designing and industrializing AI-native scientific data sets, which it brings to life in a growing suite of next generation lab data management products, scientific use cases, and AI-based outcomes. For more information, please visit tetrascience.com.

Our core values are designed to guide our behaviors, actions, and decisions such that we operate as one. We are looking to add individuals to our team that demonstrate the following values:

  • Transparency and Context- We execute on our ambitious mission by starting with radical data transparency and business context. We openly and proactively share all vital data and make it actionable, so our employees and stakeholders can solve any problem presented to them.
  • Trust and Collaboration- We are committed to always communicating openly and honestly at every level of the organization, functionally, cross-functionally, internally, and externally. Empowering our employees will drive positive change across our entire ecosystem.
  • Fearlessness and Resilience- We must be fearless and resilient to fulfill our potential. We proactively run toward challenges of all types, we unblinkingly acknowledge and confront the brutal facts - which all innovative growth companies invariably face – and we embrace uncertainty and take calculated risks.
  • Alignment with Customers- We know that our customers' success is our success. We are honored and humbled by their commitment to us, and we are completely committed to ensuring they achieve their mission to unlock the world’s most important scientific innovations.
  • Commitment to Craft- We take our craft seriously and seek to be best-in-class in all we do, regardless of our functional role, seniority, or tenure. We are members of one team that combines intellectual horsepower and curiosity, humility, and empathy to ensure we are always learning and evolving.
  • Equality of Opportunity- We cannot imagine our journey without a workforce which reflects humanity’s diversity. We seek out the best of the best who bring with them unique and invaluable perspectives and talents and embody our common values - regardless of gender, ethnicity, race, or age.

We are seeking a talented Senior Software Engineer to join our team. In this role, you will be responsible for developing and maintaining high-quality software solutions that enable scientists and researchers to leverage and analyze complex datasets. You will work closely with cross-functional teams to understand user requirements, design and implement scalable software architectures, and ensure the reliability and performance of our software products.

Who you are

You are dedicated to mastering your craft and driven by a passion for innovation. You are an inventor, committed to continually enhancing both your deliverables and the processes by which you deliver them. You take full ownership of all aspects of software development, ensuring the highest quality outcomes. You exhibit curiosity and a relentless pursuit of understanding and clarity. You view every interaction as a chance to learn and grow. Balancing humility with confidence, you recognize that you don't need to be the smartest person in the room to make a significant impact. Instead, you focus on elevating those around you, contributing to the collective success of your team.

What You Will Do

  • Join the TetraScience engineering team to develop a cutting-edge application for our customers.
  • Design and implement scalable foundational services to support data pipeline processing, search functionality, user management, and other customer-facing features.
  • Work on ML infrastructure and Generative AI applications to advance scientific use cases.
  • Build and deliver high-quality products using Agile software development methodologies.
  • Collaborate with the product management team to transform vision and ideas into tangible results.
  • Work with a geographically distributed team across various time zones.
  • Engage in continuous learning, growth, and professional development.
  • Articulate your vision to peers and leadership, while being open to constructive feedback and maintaining resilience.
  • Strong proficiency (7+ years) in Node.js, TypeScript, and Python.
  • Over 7 years of experience with Amazon Web Services (AWS).
  • Extensive experience with databases, including relational, NoSQL, and data platform infrastructure.
  • Experience with big data distributed systems, such as Databricks, Snowflake, AWS EMR, Glue, and other lakehouse technologies.
  • Proven track record with large-scale data distributed systems.
  • Excellent verbal, written, and presentation skills.
  • Bachelor’s degree in Computer Science or a related field.

  • 100% employer-paid benefits for all eligible employees and immediate family members
  • Unlimited paid time off (PTO)
  • 401K
  • Flexible working arrangements - Remote work + office as needed
  • Company paid Life Insurance, LTD/STD

We are not currently providing visa sponsorship for this position

#LIRemote

See more jobs at TetraScience

Apply for this job

+30d

Data Engineer II

Agile Six - United States, Remote
ML, Agile, Design, API, Git, C++, Python, Backend

Agile Six is hiring a Remote Data Engineer II

Agile Six is a people-first, remote-work company that serves shoulder-to-shoulder with federal agencies to find innovative, human-centered solutions. We build better by putting people first. We are animated by our core values of Purpose, Wholeness, Trust, Self-Management and Inclusion. We deliver our solutions in autonomous teams of self-managed professionals (no managers here!) who genuinely care about each other and the work. We know that’s our company’s purpose – and that we can only achieve it by supporting a culture where people feel valued, self-managed, and love to come to work.

The role

Agile Six is looking for a Data Engineer for an anticipated role on our cross-functional agile teams. Our partners include the Department of Veterans Affairs (VA), Centers for Medicare & Medicaid Services (CMS), Centers for Disease Control and Prevention (CDC), and others.

The successful candidate will bring their experience in data formatting and integration engineering to help us expand a reporting platform. As part of the team, you will primarily be responsible for data cleaning and data management tasks, building data pipelines, and data modeling (designing the schema/structure of datasets and relationships between datasets). We are looking for someone who enjoys working on solutions to highly complex problems and someone who is patient enough to deal with the complexities of navigating the Civic Tech space. The successful candidate for this role is an excellent communicator, as well as someone who is curious about where data analysis, backend development, data engineering, and data science intersect.

We embrace open source software and an open ethos regarding software development, and are looking for a candidate who does the same. Most importantly, we are looking for someone with a passion for working on important problems that have a lasting impact on millions of users and make a difference in our government!

Please note, this position is anticipated, pending contract award response.

Responsibilities

  • Contribute as a member of a cross functional Agile team using your expertise in data engineering, critical thinking, and collaboration to solve problems related to the project
    • Experience with Java/Kotlin/Python, command line, and Git is required
    • Experience with transport protocols including: REST, SFTP, SOAP is required
    • Experience with HL7 2.5.1 and FHIR is strongly preferred
  • Extract, transform, and load data. Pull together datasets, build data pipelines, and turn semi-structured and unstructured data into datasets that can be used for machine learning models.
  • Evaluate and recommend
  • We expect the responsibilities of this position to shift and grow organically over time, in response to considerations such as the unique strengths and interests of the selected candidate and other team members, and an evolving understanding of the delivery environment.

Basic qualifications

  • 2+ years of hands-on data engineering experience in a production environment
  • Experience with Java/Kotlin/Python, command line, and Git
  • Demonstrated experience with extract, transform, load (ETL) processes, as well as data cleaning, data manipulation, and data management
  • Demonstrated experience building and orchestrating automated data pipelines in Java/Python
  • Experience with data modeling: defining the schema/structure of datasets and the relationships between datasets
  • Ability to create usable datasets from semi-structured and unstructured data
  • Solution-oriented mindset and proactive approach to solving complex problems
  • Ability to be autonomous, take initiative, and effectively communicate status and progress
  • Experience successfully collaborating with cross-functional partners, including designers and researchers, seeking and providing feedback in an Agile environment
  • Adaptive, empathetic, collaborative, and positive in mindset
  • Has lived and worked in the United States for 3 out of the last 5 years
  • Some of our clients may request or require travel from time to time. If this is a concern for you, we encourage you to apply and discuss it with us at your initial interview

Additional desired qualifications

  • Familiarity with Electronic Laboratory Reporting (ELR) workflows and data flows
  • Knowledge of the FHIR data/API standard and HL7 2.5.1
  • Experience building or maintaining web service APIs
  • Familiarity with various machine learning (ML) algorithms and their application to common ML problems (e.g. regression, classification, clustering)
  • Statistical experience or a degree in statistics
  • Experience developing knowledge of complex domain and systems
  • Experience working with government agencies
  • Ability to work across multiple applications, components, languages, and frameworks
  • Experience working in a cross-functional team, including research, design, engineering, and product
  • You are a U.S. Veteran. As a service-disabled veteran-owned small business, we recognize the transition to civilian life can be tricky, and welcome and encourage Veterans to apply

At Agile Six, we are committed to building teams that represent a variety of backgrounds, perspectives, and skills. Even if you don't meet every requirement, we encourage you to apply. We’re eager to meet people who believe in our mission and who can contribute to our team in a variety of ways.

Salary and Sixer Benefits

To promote equal pay for equal work, we publish salary ranges for each position.

The salary range for this position is $119,931–$126,081.

Our benefits are designed to reinforce our core values of Wholeness, Self-Management, and Inclusion. The following benefits are available to all employees. We respect that only you know what balance means for your life and season. While we offer support from coaches, we expect you to own your wholeness, show up for work whole, and go home to your family the same. You will be seen, heard, and valued. We expect you to offer the same for your colleagues: be kind (not controlling), be caring (not directive), and be ready to participate in a state of flow. We mean it when we say, “We build better by putting people first.”

All Sixers Enjoy:

  • Self-managed work/life balance and flexibility
  • Competitive and equitable salary (equal pay for equal work)
  • Employee Stock Ownership Plan (ESOP) for all employees!
  • 401(k) matching
  • Medical, dental, and vision insurance
  • Employer-paid short- and long-term disability insurance
  • Employer-paid life insurance
  • Self-managed and generous paid time off
  • Paid federal holidays and Election Day off
  • Paid parental leave
  • Self-managed professional development spending
  • Self-managed wellness days

Hiring practices

Agile Six Applications, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, national origin, ancestry, sex, sexual orientation, gender identity or expression, religion, age, pregnancy, disability, work-related injury, covered veteran status, political ideology, marital status, or any other factor that the law protects from employment discrimination.

Note: We participate in E-Verify. Upon hire, we will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S. Unfortunately, we are unable to sponsor visas at this time.

If you need assistance or reasonable accommodation in applying for any of these positions, please reach out to careers@agile6.com. We want to ensure you have the ability to apply for any position at Agile Six.

Please read and respond to the application questions carefully. Interviews are conducted on a rolling basis until the position has been filled.

 

Apply for this job