ResearchPod

Fuzzy Logic and the Human Side of Artificial Intelligence


Artificial intelligence often struggles with the ambiguity, nuance, and shifting context that define human reasoning. Fuzzy logic offers an alternative by modelling meaning in degrees rather than absolutes.

In this roundtable episode, ResearchPod speaks with Professors Edy Portmann, Irina Perfilieva, Vilem Novak, Cristina Puente, and José María Alonso about how fuzzy systems capture perception, language, social cues, and uncertainty. 

Their insights contribute to the upcoming FMsquare Foundation booklet on fuzzy logic, exploring the role of uncertainty-aware reasoning in the future of AI.

You can read the previous booklet from this series here: Fuzzy Design-Science Research

You can listen to previous fuzzy podcasts here: fmsquare.org

Todd Beanlands: 00:05-01:07

Hello, I'm Todd. Welcome to ResearchPod. Artificial intelligence has come a long way, from machine learning to large language models. But questions remain about how we can make these systems more transparent, trustworthy, flexible, and ultimately, human-centred. That's where fuzzy logic comes in, as a way of embracing uncertainty and nuance that helps computers understand the world more like we do. This episode is part of a special ResearchPod series on fuzzy logic, created in collaboration with the FMsquare Foundation. Through conversations with world-leading experts in the field, we examine how fuzzy systems connect mathematics, meaning, and perception in relation to the upcoming booklet on fuzzy logic. Today, we speak with Professor Edy Portmann, co-head of the Human-IST Institute at the University of Fribourg and president of the FMsquare initiative, alongside professors Irina Perfilieva, Vilem Novak, Cristina Puente, and Jose Maria Alonso.

Professor Edy Portmann: 01:11-01:34

Thanks for having us here. My name is Edy Portmann. I'm a professor of computer science at the University of Fribourg. There I lead an institute called Human-IST, which stands for human-centred interaction science and technology. And I'm also the president of an initiative called the FMsquare initiative, that is, fuzzy modelling methods that we apply to create more human-centric technologies.

Professor Irina Perfilieva: 01:35-02:39

So I can continue here by myself. My name is Irina Perfilieva. I am from the University of Ostrava, where I work as a professor of applied mathematics. Actually, my occupation is fully academic. However, in parallel with the academic work of giving lectures and working with students, we also have a research institute, called the Institute of Fuzzy Modelling for short. This institute actually takes up a lot of our research activities. In the past, the research was mostly focused on fuzzy modelling. However, we have now changed direction and are mostly involved in various branches of AI, trying to make it transparent for people. This is the main focus. My second job is connected with Poland, with Krakow Technical University. In this sense, it is more research-oriented, and there we are also trying to launch many European projects under the Horizon programmes.

Professor Vilem Novak: 02:40-03:05

I am also from Ostrava, in the Czech Republic, and I work in the same institute; its official name is the Institute for Research and Applications of Fuzzy Modelling. I am a professor of algebra and number theory. Currently, I have only PhD students, and my main focus is research in the theory and applications of fuzzy modelling and fuzzy logic.

Todd Beanlands: 03:06-03:07

Then maybe Cristina?

Professor Cristina Puente: 03:07-03:37

Hi, everybody. My name is Cristina Puente. I am a professor at Comillas Pontifical University here in Madrid, Spain, and I belong to the Computer Science Department. I am a professor of programming languages and also of natural language processing, which is my specialty. My research is focused on NLP, artificial intelligence, and fuzzy logic, especially in causality.

Professor Jose Maria Alonso: 03:38-04:27

Hi everyone. My name is Jose Maria Alonso. I'm working as a professor in computer science at the University of Santiago de Compostela, even though my background is in telecommunications engineering. My research work is done in the research institute on intelligent technologies related to AI at the University of Santiago. My main contributions are on responsible AI and trustworthy AI, looking for applications of this, mainly using fuzzy systems from the point of view of transparency, explainability, and so on. In recent years, we have put a lot of emphasis on how to translate information from these systems into natural language. So we are on the borderline between NLP, fuzzy logic, and AI, looking to assist humans and looking for ways to have human-AI interaction.

Todd Beanlands: 04:28-04:34

Thank you everyone. Edy, would you like to give a summary of what the booklet's about and maybe the purpose of it?

Professor Edy Portmann: 04:35-05:09

So we are creating a booklet of which this interview will be a part, although the interview will also be online as an audio interview, where you can listen to it, and we will convey parts of this conversation to text. The idea is that we create the booklet to make it more understandable why perception-based fuzzy models are an alternative to, or maybe an enhancement of, classical binary AI systems, and we want to introduce that to interested readers.

Todd Beanlands: 05:10-05:35

Brilliant. Thank you. And I guess starting with the first chapter of the booklet, which is on fuzzistics and designing for the unpredictable human: what you've just said is about capturing the imprecision that binary systems sometimes can't. Could we start with you, Irina? Would you be able to tell us a little bit about preserving the ambiguity that humans have, and how fuzzistics is a really good method for doing that?

Professor Irina Perfilieva: 05:36-08:09

So I have reformulated the first question in terms of the hidden goal within it, which is how to preserve ambiguity in fuzzy judgement rather than reduce it to binary inputs. What I see as a key challenge is how we understand the notion of ambiguity, because ambiguity appears when we use different words to denote similar or even equal notions, called synonyms. Therefore, in order to work with judgement-making in words (if you remember the programme that Lotfi Zadeh called Computing with Words), in order to realize this programme, we shall first agree which words are taken as very similar, meaning that we will process them in the same way and simply unify them into one category, and which words are really different, meaning that their processing will be performed differently from the first category of assessments. For example, if we use the word big for making an assessment, then this word big has a lot of synonyms: large, major, massive, heavy, important, great, whatever. They can be put together into one cluster of similar words in such a way that they will all be processed in the same manner. So this is important. And if we switch to the type of processing that uses fuzzy logic, we first of all distinguish fuzzy logic in the mathematical sense from fuzzy logic in the sense in which it is used for making computations. If we consider these categories from mathematical fuzzy logic, then the structure that is useful to apply to the processing of this type of estimation is known as a residuated lattice. I will not go into the details. However, I just want to stress that, different from binary logic, which is based on a linear structure of truth values, a residuated lattice is a partially ordered structure, richer than the structure of binary logic, and therefore it allows us to express assessments of degrees of truth with more nuance.
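
To make the contrast with binary logic concrete, here is a minimal Python sketch, not from the interview itself, of one standard residuated lattice on the unit interval: the Lukasiewicz t-norm and its residuum. The variable names and the example degrees are illustrative assumptions.

```python
# A minimal sketch of one residuated lattice on [0, 1]: the
# Lukasiewicz t-norm and its residuum. Binary logic is recovered
# when truth degrees are restricted to the two values {0.0, 1.0}.

def t_norm(a: float, b: float) -> float:
    """Lukasiewicz conjunction: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def residuum(a: float, b: float) -> float:
    """Lukasiewicz implication: min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

# Two synonyms clustered together get the same degree and are
# processed identically, as described above.
degree_big = {"large": 0.8, "massive": 0.8}

print(t_norm(degree_big["large"], 0.7))  # 0.5, a graded conjunction
print(residuum(0.8, 0.6))                # 0.8, a graded implication
```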

Todd Beanlands: 08:15-08:33

To follow that up, you talked a little bit about phrasing and how certain human phrases sometimes aren't quite represented by the truth values of binary logic. I was just wondering, from a statistical modelling perspective, how can we preserve the richness of human language, and where do we have to compromise in that respect?

Professor Irina Perfilieva: 08:34-10:15

So today statistical processing is a very powerful tool, because it is inside neural network decisions: neural networks make decisions based on statistical learning or statistical decision-making. However, we shall understand that statistics also communicates with us using a rigid system of estimation; mostly these are numbers, expressed as percentages, estimations of this or that judgement. And we shall understand that numbers are symbols, just as linguistic categories are. The only difference is that numbers are processed using one kind of machinery, for example arithmetic or algebra, and linguistic categories are processed using other machinery. So we shall distinguish them. In most cases, this is a matter of preference: for example, the designers of this or that system prefer the language they understand, and because of that, they also force us to apply their language. However, we can also continue working in the language that is more appropriate to us, for example using fuzzy logic categories. So I claim that actually this is a matter of retranslating. We can still work with fuzzy logic, because it gives us a richer structure for expressing ourselves, and if somebody requires it, we can retranslate to any other language. That's it.

Todd Beanlands: 10:15-10:25

And from like a design perspective, how do you think fuzzy logic can help designers account for unpredictability and nonlinear human behaviour?

Professor Irina Perfilieva: 10:25-12:53

So if we are focused on modelling unpredictable human behaviour, then we are close to working with AI systems, because they are actually connected with the activity of our brains, and what brains produce is somehow imitated by neural networks. Therefore, in order to connect fuzzy logic machinery with neural network computation, I again rely on what I claimed before: we shall understand that these tools are considered different only on the formal level. They have a lot of similar machinery when they process data. And here I can refer to my latest results, since we have a little more time. I have long been working on fuzzy modelling expressed in terms of the so-called fuzzy transforms. This fuzzy transform actually does the job: it connects problems that are formulated in one language, mostly the language of functional analysis, and translates them into the language of fuzzy modelling. It is now also possible to connect the machinery of fuzzy transforms with the way a neural network computes, how it obtains outputs from unknown inputs, which is a matter of statistical learning. So once again, to summarize, my claim is that the different ways of processing are mostly a matter of taste, and we should not forget the deep theory that was developed during the years when fuzzy logic in a broader sense was at the top of the research community's agenda. I think we can continue in that direction, keeping in mind that the ideas, tools, and achievements that were elaborated in that field can be retranslated to the new demands of a society that prefers to use tools connected with neural network computation. That's my answer.
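
As a rough illustration of the fuzzy transform idea (a sketch under our own assumptions, not Professor Perfilieva's code), the direct F-transform compresses a sampled function into a few components, each a weighted average over one basic function of a fuzzy partition, and the inverse F-transform reconstructs an approximation:

```python
import numpy as np

# Sketch of the direct and inverse F-transform over a uniform fuzzy
# partition of [0, 1] made of triangular "basic functions".

def triangular_partition(n_nodes: int, xs: np.ndarray) -> np.ndarray:
    """Membership of each sample in each basic function, shape (n_nodes, len(xs))."""
    nodes = np.linspace(0.0, 1.0, n_nodes)
    h = nodes[1] - nodes[0]  # distance between neighbouring nodes
    return np.maximum(0.0, 1.0 - np.abs(xs[None, :] - nodes[:, None]) / h)

xs = np.linspace(0.0, 1.0, 200)
f = np.sin(2 * np.pi * xs)            # the function to be compressed

A = triangular_partition(9, xs)
components = (A @ f) / A.sum(axis=1)  # direct F-transform: 9 numbers
f_hat = components @ A                # inverse F-transform

# Max reconstruction error; it shrinks as n_nodes grows.
print(float(np.max(np.abs(f - f_hat))))
```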

Todd Beanlands: 12:54-13:27

Thank you very much. I think we'll leave it there for Chapter 1; thank you for your responses. Could we move on to Chapter 2, and to you, Vilem? Chapter 2 is on using fuzzy logic as engineering intelligence. Perhaps we could start with membership functions and degrees of membership, like being in a room that's 70% comfortable or 30% warm. From a modelling standpoint, how do we determine or validate these membership functions in real-world engineering systems?

Professor Vilem Novak: 13:27-15:51

Okay. I will start with a general claim: fuzzy set theory provides a mathematical model of the vagueness phenomenon. The vagueness phenomenon is inherent in linguistic semantics, in the semantics of natural language, and also, which is related to that, in common-sense human reasoning. Let me say what this means semantically. It occurs when we try to characterize a grouping of objects which have some property that is imprecisely given, for example, to be warm, to be hot, to be fine, and so on. Here the objects are, say, various degrees of temperature, but when we try to classify all objects having the given property, we realize that this grouping cannot be modelled by a set. It is simply not a set. Therefore, we suggest modelling it by a fuzzy set. In other words, the main idea of a fuzzy set is that instead of saying that a given element has a property, we say that it has this property in some degree, for example 0.3, 0.5, and so on. And the meanings of words or expressions of natural language are in fact names of so-called intensions. An intension can formally be modelled by a function from the set of contexts, or possible worlds, into the set of all fuzzy sets. If we specify one concrete context, then what we obtain is an extension, and this extension is a fuzzy set. The shape of this fuzzy set is determined by the logical and linguistic characteristics of the given expression. For example, for the word big that Irina mentioned, the fuzzy set typically has a shape like this; I have it on my slides, which you cannot see. This shape follows from the analysis of the meaning of such an expression. So my general answer is: if we begin with the meaning that the given expression has and we characterize its properties, then, after specifying a given possible world or context, we come naturally to the corresponding shape, and we can work with it.
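
A hypothetical sketch of the intension/extension distinction just described: the word big names a function from contexts to fuzzy sets, and fixing one context yields a concrete membership function. The ranges and numbers below are our own illustrative assumptions.

```python
# Sketch: the intension of "big" maps a context (here simply a
# numeric range) to an extension, i.e. a concrete fuzzy set.
# The linear ramp is illustrative; real shapes would follow from
# the linguistic analysis described above.

def big(lo: float, hi: float):
    """Return the membership function of 'big' in the given context."""
    def membership(x: float) -> float:
        if x <= lo:
            return 0.0
        if x >= hi:
            return 1.0
        return (x - lo) / (hi - lo)  # degree of being "big"
    return membership

big_for_dogs = big(10.0, 60.0)            # weight of a dog, in kg
big_for_cities = big(100_000, 5_000_000)  # population of a city

print(big_for_dogs(35.0))    # 0.5
print(big_for_cities(35.0))  # 0.0, same word, different context
```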

Todd Beanlands: 15:51-16:06

Thank you. Related to that, fuzzy models are often praised for being transparent and interpretable, unlike black-box systems. I was wondering, how do you balance this explainability with scalability when it comes to engineering intelligence?

Professor Vilem Novak: 16:07-17:27

Well, there are two things here. Regarding explainability, in my opinion fuzzy logic can help very much through its ability to model the meaning of expressions of natural language. There is a theory of a special class of expressions, called evaluative linguistic expressions, which have the typical trichotomy small, medium, and big, and all words related to that. So that is the first thing. The second thing is the so-called curse of dimensionality. It is a problem that all models face. In case we use fuzzy models consisting of fuzzy IF-THEN rules, the solution is to split such a rule base, which I call a linguistic description, into a hierarchical system of linguistic descriptions. Each member of this hierarchy models a specific phenomenon of the system, which can be explained and which characterizes the given area that this sub-model represents. The whole hierarchy then provides a global view of the whole model, and provides a well-interpretable and explainable account of what our fuzzy model is doing.
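
A back-of-the-envelope illustration of why such a hierarchy helps, using our own toy arithmetic rather than anything from the interview: with a three-term trichotomy per input, a flat rule base grows exponentially with the number of inputs, while a chain of two-input sub-models grows only linearly.

```python
# Toy comparison: number of rules in a flat fuzzy rule base over
# n inputs versus a hierarchy of two-input sub-models, when every
# input is described by the trichotomy small / medium / big.

TERMS = 3  # small, medium, big

def flat_rules(n_inputs: int) -> int:
    """One rule per combination of terms: exponential growth."""
    return TERMS ** n_inputs

def hierarchical_rules(n_inputs: int) -> int:
    """n - 1 two-input sub-models, each with 3 * 3 = 9 rules."""
    return (n_inputs - 1) * TERMS ** 2

for n in (2, 4, 8, 12):
    print(n, flat_rules(n), hierarchical_rules(n))
# For 12 inputs: 531441 rules flat, versus 99 in the hierarchy.
```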

Todd Beanlands: 17:28-17:40

Another criticism of fuzzy logic is the association with soft science. I was wondering how you yourself would respond to that misconception, and what do you think is misunderstood about fuzzy engineering in this respect?

Professor Vilem Novak: 17:41-18:25

I think that this misunderstanding again comes simply from people not realizing that fuzzy logic is a model of the vagueness inherent in the meaning of linguistic expressions. In general, I can say that fuzzy logic is a highly sophisticated and complicated mathematical theory that has nothing in common with soft science. Fuzzy logic and fuzzy modelling apply deep results in algebra, formal set theory, mathematical logic, functional analysis, numerical analysis, and some other mathematical theories. So we can say that fuzzy logic provides a model of imprecision, but its theory is rigorous and precise.

Todd Beanlands: 18:30-18:51

Can we move on to Chapter 3 with you, Cristina? This chapter uses fuzzy rule systems to model layered social cues. So, for example: if the smile is warm and gaze is steady, then interest is likely. In your own work, how do you define and validate causal relationships in such perceptual, non-binary domains?

Professor Cristina Puente: 18:52-21:22

Well, this is very complicated. I totally agree with Vilem and Irina that fuzzy theory rests on very solid mathematical foundations. It's very difficult, when you have to validate that a smile is bright, or that a person is happy, or that there's friction between two people, to translate that into a machine. You know, language itself is very blurry and fuzzy. It depends on people, it depends on context, it depends on geographical location, it depends on many things, even the mood of who is speaking. For instance, I'm a Spanish speaker, like Jose Maria, and we don't speak with the same accent; many of our expressions are different, and their meanings are different. So we need a very solid framework to capture all this information, because there are lots of very difficult variables to model. When we deal with causality, for instance, a deterministic rule like 'A causes B' is very easy, because it is cause and effect, so it's direct. But real life is not like that. It's not a deterministic process; it depends on context. So you need a very flexible framework to model all these variables, all these perceptions, all these feelings, because maybe for me something carries one feeling, but for the person in front of me it carries a completely different one. So it's like getting into the computer a negotiation process that relies on lots of interviews and lots of information, but most importantly, I would say, it needs a lot of feedback. Take modelling a smile, for instance, whether it's positive or negative. You cannot model it as: if the person smiles, it's great; if they don't smile, it's boring. No, because there has to be a range of degrees, because maybe that person is sad and doesn't like to smile, or maybe he's very happy and his smile is like this. So you need a very solid, very flexible framework to capture all these facets and possibilities, to model all this imprecision and all these situations.
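
As a purely hypothetical sketch of the chapter's example rule (the membership ramps, the input features, and the use of min for AND are our assumptions, not a validated social model):

```python
# Sketch of one fuzzy rule over perceptual inputs:
#   IF smile is warm AND gaze is steady THEN interest is likely.
# AND is modelled with min; a full system would aggregate many
# such rules (e.g. with max) and calibrate them with feedback.

def warm_smile(curvature: float) -> float:
    """Illustrative ramp: 0 below 0.2, rising to 1 at 0.8."""
    return min(1.0, max(0.0, (curvature - 0.2) / 0.6))

def steady_gaze(fixation_sec: float) -> float:
    """Illustrative ramp on gaze fixation time, in seconds."""
    return min(1.0, max(0.0, (fixation_sec - 0.5) / 2.0))

def interest_likely(curvature: float, fixation_sec: float) -> float:
    return min(warm_smile(curvature), steady_gaze(fixation_sec))

print(interest_likely(0.7, 2.0))  # 0.75: a degree, not a yes/no verdict
```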

Todd Beanlands: 21:23-21:40

Talking of feedback and flexibility: feedback loops are quite central to flirtation and to many forms of social behaviour. How does fuzzy causal modelling capture these dynamic, evolving interactions? And do you think machines can ever learn to adapt in real time the way that humans do?

Professor Cristina Puente: 21:41-23:40

Well, I think you got one of the most difficult topics to model, because flirtation is so personal. What I like in flirtation may not be what other people like. The computer has to establish something like a negotiation process with each person, to learn what they like and what they don't like. So all the parameters have to be continuously changing, continuously remodelled, because the system has to be very flexible to capture all these causes and consequences. In fuzzy logic, Lotfi Zadeh named this fuzzy multi-causality. It's very difficult, and in this case it's the same, because it may depend on many factors: the context, the person, the language, the geographical area, the age of the person, the gender, the mood. So, how do we deal with all this in causality? As Vilem said, you have to take the language into account, because language is imprecise. So you have to take imprecision as a variable when you model this type of causality, because it's not deterministic. It's not 'A causes B'; it's 'A causes B' with an uncertain degree, so one fact doesn't necessarily provoke the other. Modelling this type of situation is very, very complex. When I saw it, I thought it is one of the most difficult situations you can find: to negotiate with the person that is in the system. It's like an ongoing evaluation process, and that is how you will be able to model all these situations.

Todd Beanlands: 23:41-23:54

Thank you. And sort of in relation to that, and you may have already answered this a little bit, but I was wondering what do you think the role of language is in fuzzy modelling and how do we avoid misrepresenting meaning when translating it into computational rules?

Professor Cristina Puente: 23:55-25:32

Language is imprecise. I think the best way to model language is by using fuzzy logic, because it provides a whole framework for modelling imprecision, and you have to play with that. When I say I'm happy, as the others said before, what is happiness to me? Because maybe I'm happy, but maybe another person would be a little sad with my level of happiness, you know? So language is very imprecise. Take into account that in language you can use irony, for instance, and it's very difficult to model, because irony is a kind of humour, and maybe what is very funny as irony for me is absurd for other people. And it depends on the geographical area because, as I said, I'm a Spanish speaker and I live in Spain. If I lived in Mexico, even as a Spanish speaker, the situations might be completely different, and the words might be different. So how do you deal with that language? You cannot put crisp values on language, because I think that takes it out of context; it's very difficult to model language with the rigidness of crisp values. You need fuzzy sets, to make it as flexible as possible, to deal with all these complex situations, because in real life you have lots of situations, and you have to model them with a language that may vary completely from one place to another.

Todd Beanlands: 25:32-25:42

Brilliant, thank you. And just to follow that up really quickly, what do you think are the comparative risks between using a classical model and a fuzzy model?

Professor Cristina Puente: 25:42-26:44

I think a classical model for flirting is out of context. Look, I don't know if you have it there, but here in Spain we have a TV show called First Dates. Imagine you have a list of rules that you should follow to make your date like you. It's useless, because people are different from one another. So if you go in with a list of rules, saying I have to do this and this and this, it might end up in a good result, but it doesn't guarantee success, because you have to adapt to the situation. And that's what fuzzy logic allows you to do: to adapt your system to the situation, to adapt your computer to the person who is trying to flirt with the computer, trying to interact with that computer. And that's the best way to do it. Because, as I said, the real world is not precise, it's very imprecise, so you have to have a framework that allows you to deal with that imprecision in the best possible way.

Todd Beanlands: 26:45-27:14

Fantastic. Thank you. Moving on to the next chapter, which is on ambiguity, trust, and design of explainable machines. Can we turn to you, Jose Maria? And starting off, this chapter explores a shift from decision-making to deferment, where the machine doesn't aim to resolve uncertainty, but reflects it back meaningfully. In your own work with explainable AI, how do you see this kind of hesitant intelligence changing how we design human-AI interaction?

Professor Jose Maria Alonso: 27:14-30:15

Thank you very much for this nice and challenging question. First of all, I agree with many of the comments and advantages of fuzzy logic highlighted already, from the more mathematical point of view by Irina and Vilem and the more computer science view by Cristina. I would like to say that nowadays people are talking a lot about artificial intelligence, and artificial intelligence is becoming a multidisciplinary field. We need people coming from mathematics, engineering, and linguistics, but also people from psychology, social science, and so on. And I'm emphasizing this as a way to answer your question, because one of the main challenges we have nowadays is that people associate artificial intelligence with language models. These language models, unfortunately, are not trained with fuzzy logic. They are trained with statistics, with correlations. So it's really hard for this type of system, because of the way they are trained, with a huge amount of data but without any knowledge engineering or any formal model behind them, to really appreciate causality, as Cristina was discussing before, or this hesitant information. For me, the main problem is that people interacting with these systems have the feeling, or the illusion, that they are very intelligent because they reply in natural language, but people are not aware of how the systems work from a technical point of view. That means they are not aware of the limitations of these systems, and that can create excessive expectations of what these language models can do. So one of the main things is that we need to learn more from humans. If I'm asking questions of a human, I know that not everyone knows everything. Sometimes I can give you an answer; sometimes I can say, sorry, I'm not sure, I don't know how to reply to this, I need more information, I don't have all the contextual information to give an answer that would be good here. But these LLMs always give us an answer, an answer that seems appealing, that attracts our attention, and they never say 'I don't know'. People should understand that this is a strong limitation. I know there are people working on that, trying to teach the machine to recognize when it knows and when it doesn't know, and to learn to say 'I don't know' sometimes. This is related to something called uncertainty quantification: how models can convey to users the level of uncertainty they have, in the sense that they can be very sure of their answer, or they can have a lot of ambiguity in it. As was discussed before, ambiguity is natural in language; the question is how we can compute this ambiguity and communicate the perception of it to humans. Here, as you know, we have a big gap, and we can use fuzzy technology as a way to help teach the machine to manage these things. But current systems are not yet able to manage this.
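
A minimal sketch of the abstention and hedging idea just described; the confidence thresholds and the wording of the hedges are illustrative assumptions, not any deployed system:

```python
# Sketch: map a model's confidence to a hedged reply instead of
# always answering. Thresholds and wording are illustrative only.

def hedged_reply(answer: str, confidence: float) -> str:
    if confidence < 0.3:
        return "Sorry, I don't know. I would need more information."
    if confidence < 0.7:
        return f"Possibly: {answer} (but I am not sure)."
    return answer

for c in (0.9, 0.5, 0.1):
    print(hedged_reply("The flight leaves at 09:40.", c))
```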

Todd Beanlands: 30:16-30:37

Thank you. That's really interesting. And the uncertainty quantification, I think that's really interesting as well. When building AI systems, how do you incorporate that sort of interpretive feedback loop where the system doesn't just give answers but evolves its suggestions based on the ongoing input?

Professor Jose Maria Alonso: 30:38-34:02

Here there's another interesting issue, another general misconception: historically, fuzzy sets and systems are not something that appeared yesterday. They date from the 1960s, when these concepts were formalized, so we have a strong, solid background there. If you look at the history, the first systems, those that were very successful, were expert systems: there were expert designers defining all the mathematical logic behind them. Nowadays, however, we are living in the age of artificial intelligence, or let's say the age of deep learning, because what people actually call AI are these big neural networks where there are no humans in the loop, where the idea is just to give data to the machine. We understand that if we really want to make an effective design, we need humans in some way. And to incorporate humans in the loop and have this communication with them, we need to capture the perception of humans and to understand the feedback given by humans. But if the current machines, as they are defined, are not able to properly understand this feedback, we need to add something else to the machine. Here again, I think fuzzy logic can help. The big challenge is how to combine the expert knowledge that a human could formalize with the knowledge that emerges from data, in a way that is mathematically sound, with fuzzy sets, with fuzzy information, with all these ways of representing linguistic information so that machines can understand it. If this is done properly, then the feedback we get will really help the machines. For example, one of the things people do is prompt engineering, giving a list of examples to the machine so it can better understand how to proceed. But how do we do this in a way that is natural? How can we make an interactive dialogue? In this interactive dialogue, if we are responding to humans, we need calibration. As my colleagues already emphasized, we need to understand each other. It's not enough to share a common language; we need common ground, a common context, so that we understand the same concepts in at least a very close way. If that is not the case, we can use the same language, Spanish with Spanish, English with English, Czech with Czech, but if I'm a clinician and you are a computer scientist or a mathematician, we are probably using the same linguistic terms while understanding totally different things, and then there is no way to close this gap, and so no way to give the feedback. It's also important to consider, for example, how language models were trained: by holding conversations, giving some punishment or reward, and then retraining the models. We have to be careful here, because we cannot just change the information based directly on one human. We need to think about when it is possible or not to update the knowledge of the system, because otherwise we can suffer catastrophic forgetting or other negative effects. For example, we know that people try to fool dialogue systems, asking questions that are hard for the system. If this knowledge were incorporated dynamically, without any type of guardrail or any way to verify that it is good enough, we would run the risk that our system becomes biased and begins to use language that is not appropriate, which could have a very negative effect.

Todd Beanlands: 34:02-34:13

And when building your models to handle overlapping or conflicting informational cues, how do you build them to decide when to act or when to wait, for example?

Professor Jose Maria Alonso: 34:14-37:55

What we try to do is to build systems that are robust in the sense that they have different alternative ways to answer. So we have knowledge that was provided by experts, and knowledge that comes from information retrieval over several documents; it's like having the opinions of different experts, but already formalized and given to the model. When someone makes a query, we try to check and verify whether we are able to answer it or not. Deciding whether we can answer the query means looking at the resources we have, which are supposed to be curated and validated by some human who knows the topic. If we have no information that is ranked as relevant, then our reply is: sorry, I have no information, I cannot answer you. If that is not the case, if I have some source of information I can use, then I try to elaborate an answer. Usually we first make an answer to the question that is, if possible, template-based. That can sound very robotic and old-fashioned, not like a language model, but we know that it is trustworthy, that it follows the facts and can be verified. Then maybe we can use language models to rephrase it, to make it more natural for humans. But we keep the chance to verify that the final piece of information we are going to communicate is supported by some fact that we already have in a knowledge base. That means that in our applications we use pre-trained models that are general-purpose, and we fine-tune or adapt these models for very specific applications, because we believe we cannot have a machine that speaks about everything, but we can build very good machines for specific topics. Then we can train those machines on those topics and verify that they talk about things that have been verified by the experts. We can run pilots, with experts talking to the machine, and only when we are sure this is working do we let it communicate with the general public. I think this is important: it's not just about handing the dialogue system to everyone, but about taking your time, making evaluations, validating your system. In my opinion, current language models come with a lot of marketing. They promise many things that are not well validated, and they are already on the shelf for everyone to use without having been tested properly over time. For me, this is very risky, and from an ethical point of view it is probably not the best. Imagine if people in the pharmaceutical industry or in medicine did this with drugs or pills: we want them to be validated; we don't want to just try them and see what the side effects are. So here we should be careful, and when we give talks about responsible AI and how fuzzy logic can be used in this context, we put a lot of emphasis on educating people: telling them the pros and cons, the advantages and disadvantages, and making them aware of the risks of using this technology. Otherwise, maybe in 20 years we will have problems like we have with social networks, problems of addiction, problems that we cannot even imagine nowadays. We should try to keep this under control, at least as designers, because our profile is in design and technology, in order to be sure that users are using this in a safe way. I'm not interested in the perspective of marketing companies making money. I know that to make this design feasible we need money, but the driving force should not only be making money; it should be to make a system that is robust and that helps humans rather than manipulating them.
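
A schematic sketch of the retrieval gating he describes; the keyword-overlap scorer, the threshold, and the example sources are placeholders invented for illustration:

```python
# Sketch of retrieval-gated answering: reply only when some curated
# source scores above a relevance threshold, otherwise abstain. The
# keyword-overlap scorer below is a stand-in for a real retriever.

CURATED_SOURCES = [
    "The clinic is open Monday to Friday, 8:00 to 17:00.",
    "Appointments can be cancelled up to 24 hours in advance.",
]

def tokens(text: str) -> set[str]:
    return {w.strip("?.,:") for w in text.lower().split()}

def relevance(query: str, doc: str) -> float:
    q, d = tokens(query), tokens(doc)
    return len(q & d) / max(1, len(q))

def answer(query: str, threshold: float = 0.4) -> str:
    best = max(CURATED_SOURCES, key=lambda doc: relevance(query, doc))
    if relevance(query, best) < threshold:
        return "Sorry, I have no validated information on that."
    return f"According to our records: {best}"  # template-based reply

print(answer("When is the clinic open?"))
print(answer("Do you sell concert tickets?"))
```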

Todd Beanlands: 37:55-38:21

Thank you very much for your responses. And finally, can we move on to you, Edy, and talk about meaning machines and the logic of human thought? We often think of AI as a pursuit of precision, but perceptual computing accepts and even invites imprecision. So how do you reconcile this with the drive for accuracy in most tech systems? And what's the value of designing systems that say 'probably' instead of 'definitely'?

Professor Edy Portmann: 38:22-41:40

I guess I will present, to be holistic, the last viewpoint on this. We have had a mathematical viewpoint and a computer science and engineering viewpoint. I am also a computer scientist, but I'm also, I would say, a designer, and I work a lot with companies, because I have to convince them; I'm connected a lot to business, because I'm building right now something called resilient systems. And as with artificial intelligence, because resilient systems are a kind of artificial intelligence, this is a field that is totally inter- or transdisciplinary. That means we are working with psychologists and sociologists, but also engineers; you have resilience everywhere. And since this is the last chapter we are going to discuss, I will try to integrate and wrap up somehow what was discussed, because I do not want to repeat all the answers that you were able to gather; I value them all. I think your question about going more towards approximation rather than being precise can best be summarized by a British logician who lived about 100 years ago. His name was Carveth Read, and he said it's better to be roughly right than precisely wrong. And I think that is exactly what Lotfi Zadeh said when he came up with something he called the principle of incompatibility, which states that as the complexity of a system increases, our ability to make precise statements that are also meaningful and significant about the system diminishes. This problem was studied in physics before; for example, Pierre Duhem researched it. He said that sometimes you cannot be as precise as you wish to be. For example, if you ask me, Edy, how tall are you? I would say, I'm 1 metre 75. And then if you ask me back, hey Edy, are you sure about that?, I start to doubt. Am I really 175, or am I 175.5, 175.6? I cannot say it with such certainty anymore. What I want to reply here, in a nutshell, is that sometimes it's good enough to have knowledge as a summarization that we can deal with, and not go for more precise data, because more precision is sometimes not even feasible to get. And if it is feasible, it also carries cost, because we have to get this data. We need more data to learn, for example with big data, machine learning, large language models. And the more data we need, the more energy we need: first, for the sensors to get the data, and second, to process the data. And I think there is even one more problem with that, when we think of society, which I think about with resilient systems: the more data these big corporations or states need, the more they spy on people, and the more they spy on people, the less privacy-preserving the environment you are living in. So I think we should definitely move towards looking more at how the human brain works, and the human brain is more a filter than a data-processing machine. It filters away what is not necessary for the moment, and maybe we should learn from that and build more technology based on it.

Todd Beanlands: 41:40-42:05

Thank you. The question I'm about to ask, perhaps you can give a general overview of it, because we may have touched on it before. Going back to Lotfi Zadeh, he introduced the idea of fuzzy granules: categories like young or cold that we use all the time in conversation. What does it take for a machine to compute with that kind of language? And how close are we to that goal with the systems that we have today?

Professor Edy Portmann: 42:07-44:21

Here too, I do not want to repeat what the others already said, because we discussed this: we could say we are using linguistic terms to create linguistic summarizations. But what I think is the main point here, coming at it from another perspective, is that we should move from measurement-based systems to perception-based systems in order to include language. Irina already talked about computing with words and perceptions. That means if you want to talk about how you perceive the world, you need language, and language is imprecise, and your perception is imprecise too. I can best visualize that with an example: we use the phrase stop-and-go. If you're in traffic in a city, you are in stop-and-go. An exact machine that has to define the rules asks, for example: how many kilometres per hour are you driving right now? 15? 20? 12, and now 18 again? This is somehow stop-and-go, but we do not have an exact definition of what it is. We humans can use the perception-based phrase stop-and-go and do not need to go into an exact definition of what we mean by it. We can do that with fuzzy sets, where something belongs to the set more or less. So instead of saying that either you are driving fast, or you are in stop-and-go traffic, or you are stopped, we can ask: how fast are you? Maybe if you translate that to artificial intelligence, instead of asking whether something is intelligent or not, we can ask how intelligent something is. We also touched on ethics: we do not ask whether something is ethically correct or not, but how ethical it is. So we move from a yes-or-no, binary world, which we also talked about, into a world where we have shades of grey, where we say it could be more or less. And then we have something that we call alpha cuts, where we say: we need to cross this border to be able to say that this is now a safe technology that we may use in this or that respect, instead of demanding 100% certainty. Because being 100% sure is what I tried to address with my first answer; it's not easy to come up with technology that can do that, because we cannot even do it in physics and in reality.
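
A toy sketch of the stop-and-go example together with an alpha cut; all membership numbers here are made-up illustrations:

```python
# Sketch: "stop-and-go" as a fuzzy set over speed (km/h), plus an
# alpha cut that turns graded membership back into a crisp decision
# when one is required. The shape and numbers are illustrative.

def stop_and_go(speed_kmh: float) -> float:
    """Triangular membership, peaking around 10 km/h."""
    if speed_kmh <= 0.0 or speed_kmh >= 30.0:
        return 0.0
    if speed_kmh <= 10.0:
        return speed_kmh / 10.0
    return (30.0 - speed_kmh) / 20.0

def alpha_cut(membership: float, alpha: float = 0.5) -> bool:
    """Crisp view: counts as stop-and-go iff membership >= alpha."""
    return membership >= alpha

for v in (5.0, 12.0, 18.0, 28.0):
    m = stop_and_go(v)
    print(v, round(m, 2), alpha_cut(m))
# 5 -> 0.5 True, 12 -> 0.9 True, 18 -> 0.6 True, 28 -> 0.1 False
```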

Todd Beanlands: 44:22-44:41

Thank you. Moving on to the final question: you've worked extensively on perception-based systems, from time-series modelling to public service applications. What kinds of real-world problems do you think perceptual computing is uniquely suited to solve, problems that conventional AI just can't handle well?

Professor Edy Portmann: 44:41-46:56

In machine learning, for statistics, you need millions of data points to understand something, to learn something. Human beings can see one data point and already recognize what it is. Although it's only approximate, they know what a cat is, and they don't need millions of cat pictures to understand what a cat is. So I think that instead of this brute-force attack with statistics to understand what something is, we should develop and enhance our understanding of biology, of how nature does it. And I think that is also what Zadeh famously said: if you have a hammer, everything looks like a nail. He said that a lot of statisticians and AI researchers have the hammers right now, and they think every problem is a nail. What I would argue is that we should also look at other problems, and maybe at other solutions that come from nature. We already talked about cybernetic or systemic feedback loops: we learn with the system, not totally independently of it. So I think we should stress what such systems, what nature, is doing, and include that in our systems too. And based on mathematical theory, I also believe we should do for probability what NoSQL did for databases. SQL means structured query language, and NoSQL means not only structured query language: structured query language is based on relational algebra, and NoSQL means that we still use this, but we do not use only this. In the same way, we should enhance the probabilistic means we have right now, where we need statistics to predict, as these large language models do; we should also explore other possibilities, for example possibility theory as an enhancement of probability theory. And I think this is an area with a lot of research where we can still scale and improve systems, so that they can be closer to and connect better with human beings, instead of being an alien technology that mimics being intelligent but in fact is not.
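
A small sketch of the probability/possibility distinction Edy points to, with made-up numbers; possibility and necessity are used as in standard possibility theory:

```python
# Sketch: a probability distribution sums to 1 across outcomes; a
# possibility distribution only needs a maximum of 1, and the
# necessity of an event is 1 minus the possibility of its complement.

probability = {"sunny": 0.6, "cloudy": 0.3, "rain": 0.1}  # sums to 1
possibility = {"sunny": 1.0, "cloudy": 0.8, "rain": 0.4}  # max is 1

def poss(event: set[str]) -> float:
    return max(possibility[w] for w in event)

def nec(event: set[str]) -> float:
    complement = set(possibility) - event
    return 1.0 - poss(complement) if complement else 1.0

dry = {"sunny", "cloudy"}
print(sum(probability[w] for w in dry))  # 0.9: probability of staying dry
print(poss(dry), nec(dry))               # 1.0 0.6: possibility, necessity
```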

Todd Beanlands: 46:56-47:10

Thank you, Edy. I just noticed that we're coming up to about an hour now, so it's probably a good place to stop. But before we do, I was wondering if you have any concluding remarks perhaps about the topics covered in this booklet and perhaps your purpose for it.

Professor Edy Portmann: 47:11-48:46

So maybe the others can add to this. The booklet is about how we could use fuzzy modelling to better understand what language means, the meaning, the semantics, but also what it means, for example, for us humans to perceive. We do not perceive in yes or no; we perceive across a bandwidth, and we could use fuzzy modelling to capture that better. That's somehow the idea of the booklet: to make this understandable for readers. It is not at such a high mathematical level, going into the depth we just discussed here. The idea is really that the reader can see the benefits when we are talking about human beings, what is special about human beings, and how we can create better interfaces between technology and human beings. Maybe I can sum it up with something coming from robotics, which Jose also talked about, a field called embodiment. In the embodiment field, people say you need a body to become intelligent; you cannot do it without a body, and a mere program is maybe not real intelligence. And I think for embodiment we need better sensors that can understand nature in a different way than today's technology does with radar or lidar. So maybe we should move from measurement, where you say that's a metre, that's a litre, this and that, to perception: how far away are you? How warm is it? And we need structures into which we can integrate this, so that we create more intelligent systems. I think the movement towards, or the integration of, fuzzy modelling in this can help us go in that direction.

Professor Jose Maria Alonso: 48:52-50:18

As a final remark, I want to say once again thanks for the invitation to take part in this booklet. I really appreciate the initiative, because I think this is something we really need, to let people outside of the fuzzy community, who don't know about the advantages and disadvantages of this technology, see how it can help in the development of AI nowadays. I emphasize this because I was teaching in a summer school for artificial intelligence engineers, and they knew a lot of mathematics and statistics, but they were not even aware that fuzzy logic exists. It's difficult for them to incorporate this type of thing if they don't have resources like this to explain the advantages and how to use it. Because remember, we have computer science based on Boolean algebra. Boolean algebra is 0 or 1; this is the basis of computers, everything is 0 or 1. This is totally against the principle of fuzzy logic, where we have degrees, where we have something in the middle. So we need people to know that there is another way to do things, and it's not only probabilities. I fully agree with Edy that we need to see possibility theory in combination with probability theory. Sometimes you talk to people working in probability and they have the feeling that we are competing, that one thing is fuzzy logic and the other is probability, both being ways to deal with uncertainty. But I think that is not the case. I strongly believe there is a gap, and there is a way to combine them and go a step ahead. That's my view, my final remark.

Professor Vilem Novak: 50:19-50:32

I would like to add one small comment. Fuzzy logic is about truth, not probability and not possibility, but truth. We are speaking about truth values, about the degree to which a given object has a property or not.

Professor Irina Perfilieva: 50:32-52:01

However, fuzzy logic can be taken in at least two senses. One is mathematical fuzzy logic, and this is what I have been speaking about. The other is fuzzy logic in a broader sense, which allows us to make assessments, to estimate the quality of the obtained results; Jose was probably speaking about this second possibility. What I also wanted to add is a remark that what we are observing, at least in this situation with AI and neural networks, is exactly what Lotfi predicted, and Edy also cited this: when a system becomes more and more complicated, we are no longer able to characterize its behaviour in precise words. What does that mean? If we look at the many books and textbooks that characterize the computational workings of neural networks, all these books contain a lot of information about the details. So we know how a particular perceptron works. However, we are not able to put these local things together, and therefore we are facing the problem of explainability: explaining one step is not enough to explain the behaviour of the whole system. In this respect, fuzzy logic in a broader sense probably helps us to address this gap. So that's my message.

Todd Beanlands: 52:01-52:06

Thank you very much. And thank you to everyone for taking part today. It's very much appreciated.