Responsible AI at ZEISS: Compliance Without Slowing Innovation – with Alexandra Wander, ZEISS
Shownotes
ZEISS built a Responsible AI Office so AI governance doesn’t become a bottleneck. In this episode, Alexandra Wander (Program Manager for Responsible AI, ZEISS) explains why ZEISS built the office and how it orchestrates AI governance across a global, federated organization. We discuss EU AI Act compliance, risk management, AI security, and why data governance is inseparable from AI governance – plus practical steps to build policies, training, and scalable controls without slowing innovation.
This episode is part of our DATA Festival series, featuring speakers from our upcoming event in Munich. Stay tuned for more exciting insights from industry leaders sharing their cutting-edge projects and innovations.
Meet Alexandra on stage at the DATA Festival Munich in June – one of Europe's leading events for data, AI, and technology leaders.
👉 Save your spot now: https://hubs.li/Q044Z4qG0 ⏰ Limited time: 30% off with code PARTY30 until April 30
Alexandra Wander on LinkedIn: https://tinyurl.com/f9usj9py Florian Bigelmaier on LinkedIn: https://tinyurl.com/4z84k8v7 Carsten Bange on LinkedIn: https://tinyurl.com/4j96bfnf BARC on LinkedIn: https://tinyurl.com/3ft3vpxv
Show transcript
00:00:00: It's not only having this high-level governance slide that you might get in any consultancy, but it is really about getting into the processes that ZEISS has implemented as a company and then understanding how people work.
00:00:31: Hello and welcome to The Data Culture Podcast!
00:00:34: I'm Carsten Bange, founder and CEO of BARC, and with me today is again Florian Bigelmaier, because we are in a special DATA Festival edition.
00:00:44: With us today is Alexandra Wander from ZEISS, and she's a program manager and the Responsible AI Office lead.
00:00:53: And that will be our topic!
00:00:55: We'll talk about AI governance and how it's implemented at Zeiss.
00:00:58: Florian, I thought she had really interesting insights to share, didn't she?
00:01:03: Absolutely so.
00:01:06: One thing that definitely sticks out for me, or was interesting: there are many drivers to do responsible AI.
00:01:14: Of course there is risk, of course there's regulation you want to comply with, but it's also about maintaining the trust of your customers in a company, and in the systems and products a company provides.
00:01:26: We talked about: what does it mean?
00:01:29: What is responsible AI? And we also dived into the question of how to actually implement it – what can we reuse?
00:01:36: What tips does she have for companies out there that want to start this journey as well?
00:01:42: So, Carsten, why don't we just dive into it?
00:01:45: Let's go!
00:01:48: Hi Alexandra, great to have you!
00:01:50: Hi Carsten, hi Florian, I am so excited to be here.
00:01:54: Excellent! Alexandra, you are leading a Responsible AI Office.
00:02:01: How did that start?
00:02:02: Why is there a responsible AI
00:02:05: office?
00:02:06: That's us.
00:02:07: Yeah, great question!
00:02:10: The Responsible AI Office was actually an initiative from my predecessor, and the initial trigger, as probably in many organizations, was the EU AI Act coming into force in twenty twenty-four.
00:02:23: So she, being a lawyer, pushed responsible AI across ZEISS and brought the initiative to life.
00:02:31: Her vision, which I've just picked up, if you will, was to bring people from interdisciplinary backgrounds together.
00:02:40: To one table – in our case, to an office.
00:02:43: That's like:
00:02:44: we have a lawyer from my legal department who is looking at IT law, or tech law.
00:02:49: We also have cyber security, information security, at the table, and data privacy in our round in the office, and we're also looking at ethics – like, not only is it allowed from a legal perspective to use AI, but should we do that from an ethical perspective?
00:03:13: Really good – you have a very interdisciplinary approach.
00:03:18: What's the driver behind it, besides regulation?
00:03:21: You think of responsible AI as a really broad topic, including ethics and privacy and so on, which might even go a little beyond what the EU AI Act is asking for.
00:03:36: Yeah, I mean, if you look at the AI Act first:
00:03:39: yes, the trigger was legislation and compliance, of course.
00:03:44: The second thing is, if you look beyond:
00:03:48: all these different disciplines come together when you look at AI.
00:03:52: And I'd like to also answer with an example that's recently been going through the press, without naming names – and explicitly disclaiming:
00:03:58: this is my personal opinion in that matter, right?
00:04:03: There are some AI-enhanced glasses on the market. It's a fantastic tool – people who maybe have hearing disabilities can actually be enabled to be part of conversations if these glasses can record and translate in real time, right?
00:04:19: However.
00:04:20: If these recordings are not treated ethically but go, let's say, to workers who are training the AI using all the recordings they see, and people aren't made aware that while wearing the glasses they are recording – their bank transactions, or a recording in the bathroom, which is kind of awkward, right?
00:04:44: You don't want anyone else across the world to see what you see in that moment.
00:04:49: And yet that happens.
00:04:50: So ultimately, when it comes to this really powerful AI technology, we want to be a trustworthy company.
00:05:00: I mean, you could say, going back to the glasses example: it's an American company doing American things.
00:05:06: We could kind of expect that, and the scandal going through the media.
00:05:10: However, we want to make sure that ZEISS as a company is trustworthy when it comes to AI.
00:05:16: So it's really one of our values to instill trust in our products for customers.
00:05:21: And that's why we have responsible AI, with all these different disciplines at the table, which obviously
00:05:29: makes a lot of sense, at least from our point of view.
00:05:33: So is it fair to say that the Responsible AI Office is leading all of AI governance at ZEISS?
00:05:42: Yes – orchestrating, I would say, where you're saying leading.
00:05:47: So what we are looking at
00:05:48: is this kind of federated model: ZEISS is a company with forty-seven thousand employees across the globe and these different segments, from medtech to the semiconductor industry.
00:06:04: I don't want to neglect anyone here, right?
00:06:07: So from microscopy to industrial quality solutions and so on – also in the consumer markets – very different segments that we're looking at.
00:06:17: We have basically the one central place trying to be of service for everyone.
00:06:25: Because, I mean, there are certain things you don't have to do by yourself and reinvent the wheel all over again, right?
00:06:32: So yeah, that's what we're orchestrating, I would put it.
00:06:39: I think that's already making the field very broad – there's a lot to do, a lot of opportunities – but on the other hand you showed that it's necessary in a way, because you need a broad view if you want to understand what trust and responsibility are actually about.
00:06:59: It's not just about one model where I figure out, okay, this model should not have any bias in it – it's a huge system sometimes, for example the glasses you mentioned, or, I don't know, again an industrial application.
00:07:13: So does that even mean that you have new requirements?
00:07:18: Maybe also for other parts beyond the AI part, the model part – for other IT systems that now need to be treated a little differently because AI comes into the game?
00:07:35: Does that make sense?
00:07:36: Is that really the case?
00:07:38: Well, the first thing when I hear you talk about that is thinking about AI security. Other IT systems
00:07:46: have to watch out now, because if you look at the vulnerabilities: up to now, they all had vulnerabilities.
00:07:54: They were like... how do we say that?
00:08:01: Like in cyber security, a threat surface – and there are tiny holes that a manual attacker wouldn't have found so easily.
00:08:10: But these days, when attacks come from outside using autonomous, agentic AI, they can move at velocities we haven't seen before, right?
00:08:22: So they just poke holes into your IT.
00:08:26: They can hack you within hours or days.
00:08:29: If you look at the media:
00:08:30: the very recent example was the McKinsey case.
00:08:34: They had just tiny openings, and the agents were able to find these very quickly and exploit them – actually getting the IP of a company.
00:08:43: So yes, other IT systems, also at ZEISS, have to change or be updated in order to protect our company IP.
00:08:54: Obviously, AI governance cannot be thought of without data governance.
00:09:02: What's your approach on that?
00:09:05: We now see companies trying to bring them together.
00:09:08: They say, hey, they cannot be separate.
00:09:09: We see others that treat them quite separately.
00:09:15: That is a really great question.
00:09:17: If you're looking at agentic AI and where the real value generation lies – if we look at agentic AI's possibility to do process automation, end-to-end, not just automating, I don't know, your email inbox or something, but really looking at the in-depth vertical integration of AI systems – then it all comes down to the data and data quality that can be used.
00:09:42: So yeah, we're looking at that from, I'd say, two perspectives.
00:09:47: One is: what kind of data do we need
00:09:52: to be able to automate our processes properly?
00:09:56: Where is the most value?
00:09:59: That's the one thing, internally.
00:10:03: And the second question is: how can we make sure that the data governance, data catalog, whatever you need to properly train AI models,
00:10:19: could also be used in products?
00:10:22: Why is it relevant?
00:10:23: Because you need to make sure that you have all the copyrights on the data, that your data quality is high enough, and so on.
00:10:31: Multiple approaches:
00:10:32: we can offer it as a service, for example via ZEISS Digital Partners, where they do their data governance.
00:10:38: But it's also kind of a federated approach, which will be needed in the future, because doing this completely centralized in a company with such decentralized structures –
00:10:51: I don't think that's going to work in the near future.
00:10:58: But just to ask again, what is your relationship there?
00:11:02: You said you're an orchestrator.
00:11:04: So data governance, and the people responsible for data governance... is this something where, basically, from an AI governance standpoint, you would look at whether it's done properly, whether results like data quality are in order, so that
00:11:17: we are compliant with AI regulations?
00:11:22: Yeah, so we're currently moving in that direction, to bring AI and data governance closer together in order to produce value for ZEISS as a company.
00:11:32: You already mentioned that you are working with a very federated approach, which means you have a central body, your team,
00:11:40: and a lot happens decentrally.
00:11:44: In terms of people, you already mentioned there are a few disciplines involved.
00:11:49: How many people are actually working on that kind of goal, to have trustworthy or responsible AI in the end?
00:11:57: Do you have an estimation, centrally but also in the departments?
00:12:01: Probably it's just a fraction of each developer's time, maybe.
00:12:04: But very often we have no idea how much work this really costs.
00:12:09: That is a great question.
00:12:10: I've never done any estimation before.
00:12:13: What I can say is that we're working closely together with the people involved in regulatory affairs, you know, when we're looking at medical products or something, right?
00:12:21: That's a daily job:
00:12:22: looking at legislation.
00:12:25: We are working together with the information security responsibles in every segment, and they again have their structures.
00:12:32: What we have at ZEISS:
00:12:36: security engineers, which is a very specific role, to actually be in the departments and very close to the people where development happens.
00:12:45: We're also working with data privacy coordinators.
00:12:48: So if I had to give an estimate... I think I'd end up around three hundred to five hundred people.
00:12:57: That's about the number of security engineers we have at ZEISS who are going to be trained to be able to assess the risk that comes with AI.
00:13:11: Interesting. Let's maybe shift the viewpoint a little bit towards implementation
00:13:17: now, and what you actually do.
00:13:20: So obviously you're orchestrating that, but I'm sure you're also implementing certain measures or audits.
00:13:29: Basically, what is the effect in the organization?
00:13:34: What we actually do, I think, comes down to, you could say, four areas.
00:13:42: One is training, awareness, and communications. And I'm mentioning this first because usually, in the technical domain, this is the first thing you would neglect.
00:13:53: But how can you expect people to actually use guidelines and policies – and that brings me to the second area we are working in – if they are just a PDF somewhere in your management system that no one knows about?
00:14:08: So that's where the training, information, and awareness campaigns come into play.
00:14:12: So networking, spreading the word, if you want – via email,
00:14:17: via events
00:14:19: that we have initiated, right?
00:14:22: Second, creating the policies and guidelines along the EU AI Act – we know that thing by heart.
00:14:28: Meanwhile, it's like a one-hundred-forty-four-page document, plus I don't know how many amendments we are facing right now.
00:14:36: So breaking that down into one-pagers that can easily be distributed – little tiny pieces that are digestible for people who aren't reading the EU AI Act, because if you did that you would probably fall asleep.
00:14:54: So breaking that down, putting it into nice formats, writing the policies and guidelines.
00:15:00: The third area is consultation – you can't expect people, from these one-pagers or some high-level policies and guidelines, to actually be able to understand: what does that mean?
00:15:14: What does the EU AI Act say
00:15:20: for me as a developer, or as a designer?
00:15:22: So we put together, for example, design instructions:
00:15:25: how should you mark your AI-generated content so that it's clear it is AI-generated, but also in alignment with ZEISS brand and communication guidelines, right?
00:15:38: These are very practical things that we do.
00:15:40: And now I think I forgot the fourth area.
00:15:44: It's the consultations – not just breaking things down, but also consultation on all sorts of questions.
00:15:50: You can imagine, we still see many new use cases around ZEISS every day where people really want to do the right thing.
00:15:59: They want to know what they have to keep in mind when they're starting a pilot use case for their AI system, and which kind of data they can use.
00:16:07: In which country can I use it?
00:16:08: How can I make sure that this is a responsible approach I take to solve my problem?
00:16:16: Or customers have questions, like: hey, I've heard that there's this paragraph coming into force in a few months.
00:16:24: How are you addressing that?
00:16:26: And then the sales team can just write us an email and we'll look into it for them.
00:16:31: Yeah, so that's covering what we do – plus, if there's any, let's say, AI incident to be handled:
00:16:41: if there's anyone who is not comfortable with, I don't know, Copilot outputs, or anyone who registered some suspicious activities.
00:16:53: Or whatever you can think of.
00:16:57: And then we're also there to orchestrate the response,
00:17:00: using the people and processes that we already have, like from privacy or from cybersecurity, because they are really the experts in managing these kinds of incidents.
00:17:11: And orchestrating responses and escalations if needed.
00:17:15: So that's kind of our daily job, plus everything on top of that: discussing with the works council around their concerns
00:17:24: and their needs when it comes to AI system regulation within Germany, for example, making sure everyone is aligned.
00:17:34: Interesting. I have an add-on question.
00:17:38: I know ZEISS is super big on data products.
00:17:41: We had Marcus Markner, for example, also in this podcast, but also at events, and part of data products, with data contracts, is trying to automate governance.
00:17:56: To make sure, when things are happening, that you already have a few basic checks in there – for example, whether your data product contains personal information.
00:18:06: Is stuff like that also something you look at with AI governance?
00:18:13: You see me nodding.
00:18:15: Yeah, we're looking at it. Good understanding.
00:18:23: Now, I've been with ZEISS for about one and a half years, and we kind of started this Responsible AI Office as a small pilot, if you want, to get a better idea of what it takes from every segment, but also from a central perspective for the company – from a process perspective – to be successful
00:18:45: at ZEISS. I mean, it's not only having this high-level governance slide that you might get in any consultancy, but it's really about getting into the processes that ZEISS has implemented as a company and then understanding how people work
00:19:01: and do their everyday work.
00:19:04: Coming back to your question on data products and automation... Yes!
00:19:08: That is something we are looking at this year, but I have no good answer yet.
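[Editor's aside] The kind of automated data-contract check Florian describes could be sketched roughly like this – a minimal illustration only, assuming a simple dictionary-based contract with hypothetical `contains_pii` and `legal_basis` fields; none of the names here reflect ZEISS's actual tooling:

```python
# Illustrative data-contract check: flag undeclared personal data
# and missing legal basis before a data product ships.
PII_FIELDS = {"email", "name", "phone", "address", "iban"}

def check_contract(contract: dict) -> list[str]:
    """Return a list of governance findings for a data-product contract."""
    findings = []
    declared_pii = contract.get("contains_pii", False)
    # Columns whose names look like personal data.
    pii_columns = [c["name"] for c in contract.get("columns", [])
                   if c["name"].lower() in PII_FIELDS]
    if pii_columns and not declared_pii:
        findings.append(f"undeclared PII columns: {pii_columns}")
    if declared_pii and not contract.get("legal_basis"):
        findings.append("PII declared but no legal basis documented")
    return findings

contract = {
    "product": "customer-orders",
    "contains_pii": False,
    "columns": [{"name": "order_id"}, {"name": "email"}],
}
print(check_contract(contract))  # ["undeclared PII columns: ['email']"]
```

A check like this could run automatically whenever a contract changes, so basic findings surface before anyone from a governance office needs to get involved.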
00:19:18: An emerging topic that we see right now, in the agentic AI age:
00:19:23: there will be many, many agents – we don't see them yet, but we expect them to come into place in twenty twenty-six – and we do not want to become kind of a bottleneck with AI compliance checks.
00:19:40: So if you say, okay, have you checked
00:19:42: that the risk for your agent has been registered?
00:19:47: We are looking into technical solutions as well.
00:19:50: How can that be automated?
00:19:52: How can we make sure that we have an overview?
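[Editor's aside] In the simplest case, the automated registration check Alexandra hints at could be a gate in a deployment pipeline. The sketch below assumes a hypothetical central AI register with `risk_level` and `risk_assessed` entries – an illustration of the control's shape, not a description of ZEISS's actual solution:

```python
# Illustrative deployment gate: an agent may only go live if it is
# registered in a central AI register and its risk has been assessed.
AI_REGISTER = {
    "invoice-agent": {"risk_level": "limited", "risk_assessed": True},
    "hr-screening-agent": {"risk_level": "high", "risk_assessed": True},
}

def deployment_allowed(agent_id: str) -> tuple[bool, str]:
    """Gate an agent deployment on the state of the central register."""
    entry = AI_REGISTER.get(agent_id)
    if entry is None:
        return False, "not registered in the central AI register"
    if not entry["risk_assessed"]:
        return False, "risk assessment missing"
    # High-risk agents additionally need documented human oversight.
    if entry["risk_level"] == "high" and not entry.get("human_oversight"):
        return False, "high-risk agent without documented human oversight"
    return True, "ok"

print(deployment_allowed("invoice-agent"))       # (True, 'ok')
print(deployment_allowed("hr-screening-agent"))  # (False, 'high-risk agent without documented human oversight')
print(deployment_allowed("shadow-agent"))        # (False, 'not registered in the central AI register')
```

The point of such a gate is exactly the non-bottleneck property discussed here: registered, assessed agents pass without anyone in the loop, and only unregistered or high-risk cases are stopped for review.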
00:19:58: Now,
00:19:59: we talked about implementation, but also on the conceptual level:
00:20:02: I think there are a few tips, maybe, or ideas from your responsible AI program – maybe to reuse things that already exist, because AI governance is not the first thing we want to govern in companies, obviously.
00:20:20: There have been roles before that.
00:20:21: There were governance-oriented people; they had processes before this technology.
00:20:25: You've already mentioned, for example, GRC tools, data governance tools, and so on.
00:20:30: So how did you approach that?
00:20:32: Is there maybe a lever to get started more quickly and deliver quicker on AI governance?
00:20:39: Mm-hmm, definitely.
00:20:40: I think that was one of the most brilliant ideas of my predecessor: bringing together in the office also data privacy and cybersecurity experts, because there you have two examples.
00:20:54: I mean, data privacy:
00:20:55: when GDPR came into force, they had to face similar challenges. Like, how can we
00:21:03: bring in data governance, what are the processes for review and approval?
00:21:06: Who's responsible?
00:21:07: Right.
00:21:08: So they were really valuable sparring partners in setting up the process we went through.
00:21:17: And on the other hand, the cybersecurity guys
00:21:20: are more like the experts in the risk management part, as you said. Like, what's a risk?
00:21:24: That specific way of thinking about a scenario: what kinds of risks would we have?
00:21:31: Like a structured approach there.
00:21:34: And then assigning not only controls but also risk ownership to someone, because ultimately it's not the Responsible
00:21:41: AI Office that will decide if a risk is acceptable or not, because that goes back to the segment and their managing directors, for example.
00:21:52: We can only point out the risk, but the acceptance –
00:21:56: the risk appetite that everyone has – cannot be decided centrally.
00:22:03: Again, a decentral approach – and that's something where the cybersecurity approach was really helpful, in their way of thinking and assessing, getting clarity on how we should handle it.
00:22:16: So I think, yeah, that's my answer to that: having these two disciplines at the table, not only with their general approaches but with their approaches established
00:22:27: at ZEISS was extremely valuable to leverage.
00:22:31: I think that's a great recommendation as well.
00:22:34: So if your company is not looking at it this way, you have to team up there, because it makes complete sense.
00:22:41: And then combining data governance with that already gives a very good overall picture.
00:22:48: Talking about the overall picture, let's maybe take a more global view on this.
00:22:54: Obviously, we are here in Europe, having to comply with the EU AI Act, but there is regulation all over the world, and I would like to understand your view of that.
00:23:07: So how do you handle it?
00:23:10: Keeping track – and, to my understanding, partly it's even contradicting what is demanded.
00:23:17: I heard a few examples, and they were more from the privacy side, about how to treat personal data, but maybe you could give us just an overview here.
00:23:29: So what's the global picture on this?
00:23:33: Yeah, that is a really great question, because that's where the federated approach comes into place again, and we have it at ZEISS.
00:23:40: For example, if you take data privacy and PII: that's completely covered – whether it's AI or not AI – by our data privacy department and the data privacy coordinators.
00:23:53: So that's what they're looking at.
00:23:56: The second thing is cyber security: we have an organization that is established globally, and so this is our means to also look into global regulation.
00:24:07: What we see right now: it's not as pressing for India and China, because they already have legislation in place when it comes to AI – in China heavily tied to security as well, right?
00:24:19: We keep coming back to that... and we are actually trying to make sure that at ZEISS we will establish a central AI register.
00:24:28: However, we from Europe, as you say, cannot be the approvers of any controls that they implement for a Chinese product, for example.
00:24:40: So we're establishing a kind of process and setup where the Chinese lawyers will still be consulted, because they are the experts in their field, right?
00:24:49: But again, orchestrating:
00:24:52: how can we work on this together and make it work, so that the products can also be assessed and deployed according to the regulations, for example in China?
00:25:06: Just out of curiosity, what are the main differences
00:25:10: in the regulation?
00:25:11: I'm sure they're not similar.
00:25:15: No, they're not similar.
00:25:16: You mean from China, for example?
00:25:18: Different.
00:25:19: Yeah, they are mostly...
00:25:24: I think they are mostly looking at content moderation – that's the scope of their law.
00:25:30: Like security and content moderation, if I'm not mistaken.
00:25:34: And Europe is mostly looking at use cases from a risk-based approach:
00:25:39: what's the risk to the general public if you do this or that using AI?
00:25:45: So, you already mentioned that the EU AI Act is a lot about risks, right?
00:25:50: It's oriented on the individual, in a way.
00:25:54: Do you think that's a good way of being responsible with AI in general?
00:26:00: Or is it maybe even overlooking parts?
00:26:02: Because in the end, you also said: creating trust internally in AI, and from your customers in the solutions you're delivering.
00:26:12: So... is this really the full picture?
00:26:15: Sometimes risk feels like it's just part of the picture, and it looks a bit like a legal view of this topic sometimes.
00:26:25: So, I'm personally a huge fan of the EU AI Act and what they try to govern here – because, as opposed to
00:26:32: the U.S.,
00:26:32: where they cannot make up their minds about whether they actually want to regulate or not,
00:26:38: and you're living in the Wild West:
00:26:41: all kinds of use cases where you think, like, oh my God, I don't want that to be done
00:26:46: to me.
00:26:47: Just look at the recent conversations around public surveillance using AI, or about autonomous weapons using AI without a human in the loop.
00:26:58: Both cases are not acceptable
00:27:00: in the European Union,
00:27:01: so I'm glad to be here!
00:27:03: And when you look at the prohibitions – they are really in the right areas, if you want my personal opinion.
00:27:13: We don't have AI systems without human oversight operating – like some children's toys that automatically try to make children buy whatever. Vulnerable groups
00:27:35: are protected by the EU AI Act.
00:27:39: When it comes to high-risk systems, we have to differentiate a little bit more.
00:27:45: I think they really tried a good approach but maybe haven't thought it through to the end, because of the way it is written right now – and that's why it's also under discussion to be changed. Especially when it comes to medical products, it's really hard for companies to fulfill all the requirements, plus they haven't set the standards yet on how to fulfill the requirements.
00:28:13: So I mean... I really want high-quality software when it comes to these high-risk cases, right?
00:28:21: I'm a space engineer.
00:28:22: We are used to quality management, also in software development, because you only have one chance for that thing to go up – and the same is true for medical systems.
00:28:33: However, I think there are some edge cases they haven't thought through.
00:28:38: If you want to keep some algorithm on the market that has been working for fifteen years, that has a proven track record of being helpful and accurate –
00:28:48: but you can't provide your original data now, when the high-risk requirements come into force.
00:28:53: That would force you, as a European company, to take this product off the market. And... that's too hard.
00:29:00: So I think they are doing the right thing at these points – which is also, I think, a precedent in European legislation – adjusting less than two years after the law came into force.
00:29:16: So the original answer was: yes, I like the regulation.
00:29:21: The EU is trying to do the right thing, and I think it's good that they're also approaching this as an iterative process.
00:29:30: Yeah absolutely things are moving so fast on AI.
00:29:34: That makes sense.
00:29:36: An add-on question to that.
00:29:37: I mean, there's a lot of discussion about whether the AI Act and compliance and regulation are basically slowing down European companies, and I think you, as a global company, have a great view on it.
00:29:50: You already alluded to the topic with agentic AI, where we said you don't want to slow down your own company or development.
00:29:59: So what does it take?
00:30:01: How can we be compliant but not too slow?
00:30:08: Oh, I love that question!
00:30:13: So the approach we're taking is... no, let me rethink my answer.
00:30:22: The point is, I think this hits mostly industries which are not regulated per definition.
00:30:29: If you are working in a regulated industry – like finance, insurance, medical technology, space, or defense –
00:30:36: then you're already used to a burden,
00:30:41: if you want to phrase it like that, of regulations.
00:30:43: And you just have processes; you get kind of used to these processes, a kind of quality management system.
00:30:51: That, I mean... that's how things are.
00:30:54: And now tell me that space is not innovative, right?
00:30:56: They can still be innovative.
00:30:59: So are medical technology manufacturers.
00:31:04: And I think that, yes, for small companies it might be a high burden to get up and running, but right now we're at the point where you have to do all kinds of things using AI anyway.
00:31:22: So why not set up a quality management system, a systematic process, to help them be compliant?
00:31:32: And I also think that's what the EU is looking at as well,
00:31:34: right, in the iteration they're currently doing – for small companies, having less burden of proving all the requirements from the EU AI Act, and just being able to develop faster.
00:31:46: Yeah, I think that's one part of the iteration right now: making it a bit simpler.
00:31:51: But what you said before might be even more important.
00:31:57: Sometimes it's a bit vague – like, what does that exactly mean?
00:32:00: Like providing transparency, or providing your risk management system – giving a bit more level of detail on how this could look,
00:32:08: so you're on the safe side, so to speak.
00:32:11: I think that could especially help small companies that can't spend enormous amounts of money and time to have the conceptual work done. Having a clearer picture of how this should look might also help the smaller companies, right?
00:32:32: Yeah.
00:32:33: Definitely.
00:32:36: So,
00:32:37: when we now look back a bit at what we discussed in the last twenty to thirty minutes – I think there was a lot of it.
00:32:45: If you would summarize the learnings you especially took away from your journey on responsible AI: are there some key takeaways you would recommend to others that want to do the same, in a sense?
00:33:01: I mean, "the same" is obviously not easy to scope, but to those who also want to make progress on AI governance,
00:33:10: who also want to create trust within their own company and want to ensure that their customers trust them.
00:33:17: So what are the major tips
00:33:20: you could give them?
00:33:25: That's a really great question, thank you for that.
00:33:27: I think I would tell them to start small: look at one AI product or solution they want to implement and start governing from there, in the sense of documenting.
00:33:40: Document what you see, document the questions that you ask the provider or the team developing things. Start small, start with a prototype, document everything, then iterate – get better over time, right?
00:33:57: You have to start somewhere, and sometimes we have the ambition to set it all up at once.
00:34:04: Expecting everything at once – that will not work!
00:34:07: And then find, um... the experts in your organization, um... that can help you. Think of the people and processes you already have – both are in place.
00:34:22: Super interesting.
00:34:23: Thanks so much, Alexandra!
00:34:25: We will be seeing you at the DATA Festival in Munich in June, and that will also be a chance for our listeners and viewers to meet you in person
00:34:36: and maybe dive deeper into this super important and super interesting topic.
00:34:40: I'm really looking forward to it.
00:34:42: That's why I say: see you soon. Bye bye!
00:34:46: Yeah, see you.
00:34:47: Thank you so much for having me – really looking forward to the DATA Festival.
00:34:51: Thank you.
00:34:52: Take care.