Senior Machine Learning Engineer

Raymond James is looking for a Senior Machine Learning Engineer to join its Data Science team. This individual will be responsible for architecting, designing, and building enterprise-scale machine learning applications as a core service. This will include taking models developed by data scientists and integrating them with the rest of the company's platform. The goal will be to bring machine learning and optimization models into production alongside a highly multi-disciplinary team of data scientists, data engineers, business analysts, strategic partners, product managers, and domain experts. This is a new role within the organization, so the desire and ability to ramp up quickly and grow into a technical lead as we continue to build out the team will be crucial for long-term success.


Responsibilities:

  • Partners with data scientists, data engineers, analysts, subject matter experts, and business stakeholders to operationalize models and deliver insights to the business.
  • Architects platforms that are highly modular, scalable, and responsive, and that serve as a foundation for building key enterprise Data Science components. Ensures that machine learning code, models, and pipelines are deployed successfully into production, and troubleshoots issues as they arise.
  • Automates model training, testing, and deployment via machine learning continuous-delivery pipelines.
  • Builds data APIs and data-delivery services that support critical operational and analytical applications for internal business operations, customers, and partners.
  • Participates in all aspects of the SDLC, from requirements and design through build and deployment.
  • Translates business requirements into working foundational platform components, ensuring that functional and non-functional requirements are met.
  • Defines strategic direction and develops tactical plans. Works with application and infrastructure teams to provision platform components.
  • Effectively identifies opportunities for change, implements change and introduces new concepts, procedures, policies and tools while providing a clear explanation of benefits and purpose.
  • Documents architectural standards and best practices, and mentors application teams on developing highly distributed, resilient, and responsive applications.
  • Serves as the primary point of contact on the most complex or escalated issues and may provide direction and guidance to team members.
  • Understands and incorporates best practices in security and data protection.


Qualifications:

  • Minimum of a B.S./M.S. in Computer Science, Electrical Engineering, or a related field and five to seven (5-7) years of related experience, or an equivalent combination of education, training, and experience.
  • Seven to ten (7-10) years of experience architecting and building high-performance, enterprise-scale applications strongly preferred. Experience with design patterns and with implementing and deploying AI and/or data science products.
  • Expertise in a variety of classic and modern machine learning techniques, including clustering, decision trees, classification, regression, and neural networks/deep learning. Expertise in mining complex data (structured and unstructured), identifying patterns, and feature engineering.
  • Experienced in using AI/ML platforms, technologies, and techniques (e.g., TensorFlow, scikit-learn, Spark MLlib).
  • Highly experienced with back-end programming languages and associated frameworks (Python/Flask preferred). Proficient in front-end languages (HTML, CSS, JavaScript) and familiar with common JavaScript libraries (e.g., D3) and frameworks.
  • Experience with distributed version control systems and tools (Git/GitHub) and with automating application deployment, continuous delivery, and continuous integration (e.g., Jenkins).
  • Experienced with deploying and managing infrastructure based on Docker or Kubernetes, and with cloud platforms such as Azure, AWS, or Google Cloud Platform.
  • Experience designing, building, and deploying production-level data pipelines using tools from the Hadoop stack (HDFS, Hive, Spark, HBase, Kafka).
  • Knowledge of data engineering and experience profiling and optimizing large-scale data processing systems; familiarity with Linux and scripting languages.
  • Proficiency with SQL and NoSQL databases.
  • Experience implementing and integrating RESTful API services (preferred).


Licenses/Certifications:

  • None required.