
Senior AI Engineer
Paramount
Burbank, CA
This listing was removed by the employer on 5/1/2026 9:44 PM PST.
This is a Full Time Job
#WeAreParamount on a mission to unleash the power of content… you in?
We’ve got the brands, we’ve got the stars, we’ve got the power to achieve our mission to entertain the planet – now all we’re missing is… YOU! Becoming a part of Paramount means joining a team of passionate people who not only recognize the power of content but also enjoy a touch of fun and uniqueness. Together, we co-create moments that matter – both for our audiences and our employees – and aim to leave a positive mark on culture.
Job Title: Senior Applied AI Engineer
Team: Global Quality Engineering
Location: New York City, Los Angeles, San Francisco
Overview
Paramount Skydance Corp. is seeking a Senior Applied AI Engineer to architect, build, and operationalize AI-driven solutions that transform how we deliver software quality across the enterprise. This role blends advanced machine learning, large language models, and software engineering expertise to improve automation efficiency, accelerate feedback loops, enhance defect detection, and deliver predictive quality insights.
You will be a key member of the Global Quality Engineering (GQE) team and partner with DevOps, SRE, and Infosec teams to embed AI capabilities directly into the SDLC, leveraging modern platforms such as Vertex AI to deliver scalable, resilient, and impactful AI solutions for Quality Engineering initiatives.
Key Responsibilities
AI/ML Solution Development
• Architect, develop, and deploy end-to-end AI/ML systems addressing key QE workflows (e.g., bug prediction, app confidence scoring for incremental releases, flaky test detection, intelligent test prioritization, anomaly detection).
• Build, optimize, and tune RAG pipelines, including:
  • embedding and vector store selection
  • chunking and retrieval optimization
  • hallucination mitigation and grounding techniques
  • hybrid LLM architectures
• Perform LLM fine-tuning (full-model, LoRA/QLoRA, instruction tuning) and determine when fine-tuning is appropriate vs. RAG-only or hybrid approaches.
• Build LLM tools for:
  • test case generation (manual and automated)
  • synthetic test data creation
  • log and telemetry summarization
  • automated triage and quality insights
• Develop model evaluation frameworks ensuring accuracy, robustness, and safe behavior over time.
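To make the RAG responsibilities above concrete, here is a minimal, stdlib-only sketch of the retrieval-and-grounding step. The bag-of-words "embedding", the scoring function, the prompt template, and the sample chunks are all illustrative stand-ins — a production pipeline would use a learned embedding model and a vector store such as Vertex AI Vector Search.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (stand-in for a learned embedding model)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]


def grounded_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer only from retrieved context
    (one simple grounding/hallucination-mitigation technique)."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"


chunks = [
    "Flaky tests pass and fail nondeterministically across runs.",
    "LoRA adds low-rank adapter matrices to frozen model weights.",
    "Release gates block deploys when quality signals regress.",
]
top = retrieve("why do flaky tests fail sometimes", chunks, k=1)
# top[0] is the flaky-test chunk: it shares the most terms with the query.
```

Chunking strategy, embedding choice, and hybrid (keyword + dense) retrieval are exactly the tuning knobs the role calls out above.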
Global Quality Engineering Innovation
• Identify and prioritize opportunities to integrate AI automation across test strategy, execution, triage, and release decisioning.
• Integrate AI into CI/CD pipelines for dynamic risk-based testing, anomaly detection, and intelligent quality gates.
• Build solutions that analyze logs, traces, telemetry, and user signals to surface emerging quality risks.
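One classical (non-LLM) signal behind flaky-test detection is the outcome flip rate over a test's run history. The sketch below is a hypothetical, stdlib-only illustration — the test names, threshold, and scoring are invented for the example, and real systems would combine many more signals (timing variance, environment diffs, retry outcomes).

```python
def flip_rate(history: list[bool]) -> float:
    """Fraction of consecutive runs where the outcome flipped (pass <-> fail)."""
    if len(history) < 2:
        return 0.0
    flips = sum(a != b for a, b in zip(history, history[1:]))
    return flips / (len(history) - 1)


def flag_flaky(results: dict[str, list[bool]], threshold: float = 0.3) -> list[str]:
    """Flag tests whose outcomes flip more often than the threshold."""
    return sorted(t for t, h in results.items() if flip_rate(h) > threshold)


runs = {
    "test_login":    [True] * 10,                       # stable pass: flip rate 0.0
    "test_checkout": [True, False, True, False, True],  # alternating: flip rate 1.0
    "test_search":   [False] * 5,                       # consistently failing, not flaky
}
flaky = flag_flaky(runs)  # -> ["test_checkout"]
```

Note that a consistently failing test scores 0.0 — flakiness is nondeterminism, not failure, which is why flip rate separates triage queues from flaky-quarantine queues.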
Cloud & Platform Engineering (Vertex AI)
• Leverage Google Cloud Vertex AI to build scalable, production-grade AI systems, including:
  • Model Garden
  • Vertex AI Training, Tuning (LoRA/QLoRA), and Custom Jobs
  • Vertex AI Vector Search for high-performance retrieval
  • Vertex AI Pipelines for automated ML workflows
  • Vertex AI Online Endpoints for real-time inference
• Integrate Vertex AI with GCP services (BigQuery, Cloud Run, GKE, Pub/Sub) for full production deployment.
Technical Leadership
• Lead architectural decisions on LLM system design, MLOps, data pipelines, and monitoring strategies.
• Mentor engineers on applied ML, modern AI development, prompt engineering, and RAG-vs-fine-tuning tradeoffs.
• Partner in the creation of engineering standards for model governance, safety, code quality, and scalable AI development.
Cross-Functional Collaboration
• Collaborate with peers in GQE as well as with DevOps, SRE, and Infosec teams to translate quality challenges into high-value AI solutions that accelerate testing.
• Work closely with Data Engineering to ensure training data quality, governance, privacy, and compliance.
• Clearly communicate complex concepts to a variety of audiences including executives, engineers, and non-technical stakeholders.
Required Qualifications
• 7 years of experience in machine learning engineering, software engineering, or applied AI.
• Strong expertise in Java, Python, PyTorch/TensorFlow, and modern LLM tooling.
• Deep hands-on experience with RAG systems, including:
  • vector database design and embedding evaluation
  • retrieval optimization and hybrid architectures
  • hallucination reduction and grounding strategies
• Strong hands-on experience with LLM fine-tuning, including:
  • full-model and parameter-efficient approaches
  • LoRA/QLoRA, instruction tuning, dataset curation
  • cost, latency, and behavior tradeoff analysis
• Expertise selecting between RAG vs. fine-tuning vs. hybrid approaches based on data characteristics, quality needs, and business constraints.
• Production experience with Google Cloud Vertex AI, including training, tuning, pipelines, Vector Search, and real-time model deployment.
• Solid understanding of quality engineering tools, automation frameworks (Selenium, Appium, Playwright, pytest, JUnit, TestNG), and CI/CD systems.
• Experience with MLOps platforms (MLflow, Kubeflow, SageMaker, Databricks) and cloud platforms (Azure, AWS, or GCP).
Preferred Qualifications
• Experience building AI systems specifically for engineering productivity or quality engineering.
• Familiarity with:
  • observability tools (Grafana, Prometheus, OpenTelemetry)
  • synthetic data generation
  • code analysis and static/dynamic analysis tools
• Experience mentoring engineers or serving as a technical lead on cross-functional AI projects.
Key Competencies
• Architectural judgment – mastery of when to apply RAG, fine-tuning, hybrid retrieval models, or classical ML.
• Innovation mindset – constantly identifying opportunities to improve velocity, quality, and automation with AI.
• Systems thinking – ability to model complex SDLC and QE workflows.
• Communication – strong communication skills and the ability to drive consensus among stakeholders.
• Ownership – ability to drive initiatives end-to-end with autonomy.
Success Metrics
• Reduction in escaped defects and increased early bug detection.
• Significant reduction in flaky tests and triage time.
• Faster release cycles through intelligent, AI-driven testing strategies.
• Improved developer and tester productivity via AI-powered tooling.
• Successful deployment and adoption of enterprise-grade AI systems across the org.
#LI-JC1