Company Description

It all started in sunny San Diego, California in 2004, when a visionary engineer, Fred Luddy, saw the potential to transform how we work. Fast forward to today: ServiceNow stands as a global market leader, bringing innovative AI-enhanced technology to over 8,100 customers, including 85% of the Fortune 500®. Our intelligent cloud-based platform seamlessly connects people, systems, and processes to empower organizations to find smarter, faster, and better ways to work. But this is just the beginning of our journey. Join us as we pursue our purpose to make the world work better for everyone.

Team

ServiceNow Research conducts both fundamental and applied research to future-proof AI-powered experiences. We are a group of researchers, applied scientists, and developers who lay the foundations for, research, experiment with, and de-risk the AI technologies that will unlock new work experiences. In this role, you will be part of the highly dynamic AI Trust & Governance Lab, whose objective is to promote breakthroughs and advances in trustworthy AI. Our work stretches across the main pillars that drive trust, including safety, reliability, robustness, security, and more. We work on methods to identify, understand, and measure existing and emergent risks and capabilities. We do this through research, experimentation, prototyping, and advising, helping teams throughout the company strengthen and deepen their approach to trustworthy AI.

Role

Applied research involves applying concepts and methods emerging from fundamental research to real-world contexts, and exploring how they might need to change to increase their relevance and scalability for teams throughout the company. In this role, you will work alongside other Trust & Governance applied researchers in the field of trustworthy AI. Overwhelmingly, this will involve working on challenges associated with large generative models across a wide variety of use cases. In some cases, you might work on training new models; in others, you might focus on risk detection and measurement for models built or fine-tuned by others. You will thrive in this role by being able to quickly understand the problem at hand, consider potential solutions, and validate your ideas. As part of the Trust & Governance Lab, strong communication skills are a must to bridge gaps between teams in fundamental research, product, governance, and beyond.

Qualifications

To be successful in this role you have: