Using AI to Understand the Internet of People
This talk discusses how artificial intelligence techniques can be used to process and extract insights from data produced by people. Different projects from the IBM Research Brazil laboratory illustrate the challenges and opportunities of AI in the Internet of People, including applications in social media text analytics, life-event detection, and social imagery processing. The talk also explores the use of AI and ML in new finance-related applications, including some results on the use of graph analytics on healthcare management data and a prototype of an intelligent agent for investment advice. The importance of quantitative and ethnographic studies as tools for algorithm discovery and validation in the Internet of People is also highlighted, with examples from the laboratory’s work on microcredit applications.
Claudio Pinhanez is a researcher, professor, and innovator. He currently leads the Social Data Analytics research group of IBM Research Brazil. He has been a researcher at IBM Research since 1999, working on Social Media and Networks, Cognitive Computing, Service Science and Design, Ubiquitous Computing, and Human-Computer Interfaces. Claudio received his Ph.D. in 1999 from the MIT Media Laboratory and was a visiting researcher at ATR-MIC (Japan) in 1996 and at the Sony Computer Science Laboratory (Japan) in 1998. He was also an associate professor in the Department of Computer Science of the University of São Paulo from 1987 to 1993. @cinhanez
Concept Lattices for Knowledge Discovery and Knowledge Engineering
Knowledge discovery in large and complex datasets is one of the main topics addressed by “Data Science” and is also of primary interest in the “Science of Knowledge” (or Artificial Intelligence). Indeed, data and knowledge interact: knowledge discovery is applied to datasets and has a direct impact on the design of knowledge bases (or ontologies). Accordingly, it is interesting to have at hand a generic formalism supporting both knowledge discovery and knowledge processing (knowledge representation and reasoning).
In this presentation, we introduce some elements of Formal Concept Analysis (FCA), a mathematical formalism for data and knowledge processing. FCA starts with a binary table composed of objects and attributes and outputs a concept lattice. In a concept lattice, each concept is made of an intent (i.e. the description of the concept in terms of attributes) and an extent (i.e. the objects that are instances of the concept). Intents and extents are two dual facets of a concept that naturally apply in knowledge representation. Moreover, in some cases, the structure of a concept lattice can be visualized, allowing a suggestive interpretation for human agents while also being processable by software agents.
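To make the intent/extent duality concrete, here is a minimal Python sketch (the toy animal context is invented for this illustration, not taken from the presentation): it enumerates all formal concepts of a small binary table by closing every attribute subset under the two derivation operators.

```python
from itertools import combinations

# Toy binary context: rows are objects, columns are attributes.
# (Invented for illustration; not data from the presentation.)
objects = ["cat", "dog", "sparrow", "eagle"]
attributes = ["has_fur", "has_wings", "flies"]
incidence = {
    "cat": {"has_fur"},
    "dog": {"has_fur"},
    "sparrow": {"has_wings", "flies"},
    "eagle": {"has_wings", "flies"},
}

def extent(attrs):
    """All objects that have every attribute in attrs."""
    return frozenset(o for o in objects if attrs <= incidence[o])

def intent(objs):
    """All attributes shared by every object in objs."""
    shared = set(attributes)
    for o in objs:
        shared &= incidence[o]
    return frozenset(shared)

def concepts():
    """Enumerate formal concepts: pairs (extent, intent) closed under
    the two derivation operators. Exponential in the number of
    attributes, which is fine for a toy context."""
    found = set()
    for r in range(len(attributes) + 1):
        for combo in combinations(attributes, r):
            ext = extent(set(combo))
            found.add((ext, intent(ext)))
    return found

for ext, itt in sorted(concepts(), key=lambda c: -len(c[0])):
    print(sorted(ext), "<->", sorted(itt))
```

On this context the four concepts form a small lattice: the top concept groups all four animals under an empty intent, ({cat, dog}, {has_fur}) and ({sparrow, eagle}, {has_wings, flies}) sit in the middle, and the bottom concept pairs an empty extent with all attributes.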
There are two main extensions of FCA, Relational Concept Analysis (RCA) for dealing with relational data and Pattern Structures (PS) for dealing with complex data (numbers, sequences, trees, graphs). We will discuss the capabilities of FCA and its extensions in knowledge discovery and knowledge engineering through various applications, including text mining, information retrieval, biclustering, recommendation, definition mining and discovery of functional dependencies.
Amedeo Napoli is a senior scientist at CNRS in France, with a doctoral degree in Mathematics and a habilitation degree in Computer Science. He is the scientific leader of the Orpailleur research team at the LORIA Laboratory in Nancy (CNRS – Inria Nancy Grand-Est – Université de Lorraine), which includes roughly 30 members. The main research themes of the team are knowledge discovery and knowledge representation. Amedeo Napoli is a specialist in formal concept analysis and its variations (pattern structures and relational concept analysis), pattern mining, and text mining. In parallel, he is interested in description logics, case-based reasoning and classification-based reasoning, in semantic web technologies, and especially in ontology engineering. Amedeo Napoli is involved in many research projects at the international and national levels, with applications in biology, chemistry, and medicine. He has participated in European projects, French ANR projects, and industrial projects, and has been involved in many international collaborations with European countries, Canada, Russia, and South America (Argentina, Brazil, Chile). Regarding service to the scientific community, he has served as chair or program committee member of national and international conferences and workshops. Moreover, he has authored or co-authored more than two hundred publications and has supervised around 25 PhD theses.
Interval-valued Fuzzy Sets and Their Applications
Since the introduction of fuzzy sets by Zadeh in 1965, different types of fuzzy sets have been defined, providing different theoretical approaches to the handling of uncertainty. However, in applied work, the results obtained with them have not always been better than those obtained with type I fuzzy sets. This leads skeptics about these sets to argue as follows: when we use new types of sets, we almost always have to handle more information, but the improvement in the results is not proportional to the amount of information that we use. In my opinion, this problem arises from the difficulty of building the best fuzzy set for the application at hand. However, in recent years, the development of new techniques to build intervals representing uncertainty, and the introduction of a method to build admissible linear orders between intervals by means of aggregation functions, have led to applications where interval-valued fuzzy sets provide better results than fuzzy sets. We should remark that in the papers where this improvement is shown, a comparison to the best fuzzy techniques for the considered problem is always carried out. In particular, I will present new results obtained using interval-valued fuzzy sets for classification problems which outperform two state-of-the-art fuzzy classifiers, namely the FARC-HD method and the FURIA algorithm.
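As a sketch of what such an admissible order built from aggregation functions can look like, the following Python fragment follows the standard construction from a pair of aggregations K_α(a, b) = a + α(b − a) with α ≠ β; the parameter values and the example intervals are illustrative choices, not taken from the talk.

```python
def K(alpha, interval):
    """Aggregate the interval [a, b] into the point a + alpha * (b - a)."""
    a, b = interval
    return a + alpha * (b - a)

def leq_admissible(x, y, alpha=0.5, beta=1.0):
    """Admissible linear order generated by the pair (K_alpha, K_beta),
    with alpha != beta: compare by K_alpha first, break ties with K_beta.
    The resulting order is total and refines the usual componentwise
    (lattice) order on closed subintervals of [0, 1]."""
    if K(alpha, x) != K(alpha, y):
        return K(alpha, x) < K(alpha, y)
    return K(beta, x) <= K(beta, y)

# [0.375, 0.625] and [0.25, 0.75] share the midpoint 0.5, so they are
# incomparable componentwise; the tie-breaking aggregation ranks them.
print(leq_admissible((0.375, 0.625), (0.25, 0.75)))  # True: smaller upper bound
print(leq_admissible((0.25, 0.75), (0.375, 0.625)))  # False
```

Having a total order between intervals is what makes interval-valued methods usable inside algorithms that must pick a maximum, e.g. when selecting the winning rule in a fuzzy rule-based classifier.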
I will also present results in image processing: adapting the Huang and Wang algorithm to the interval-valued fuzzy setting has made it possible to show that, for some regions in ultrasound images, the resulting segmentation is better than the one obtained with the same algorithm using only type I fuzzy sets.
Humberto Bustince received his B.Sc. degree in Physics from the University of Salamanca, Spain, in 1983 and his Ph.D. degree in Mathematics from the Public University of Navarra, Pamplona, Spain, in 1994. He has been a teacher at the Public University of Navarra since 1991 and is currently a Full Professor with the Department of Automatics and Computation. He served as sub-director of the Technical School for Industrial Engineering and Telecommunications from 01/01/2003 to 30/10/2008 and was involved in the introduction of Computer Science courses at the Public University of Navarra. He is currently involved in teaching artificial intelligence to computer science students. Dr. Bustince has authored more than 100 journal papers (Web of Knowledge) and more than 120 contributions to international conferences. He has also co-authored four books on fuzzy theory and extensions of fuzzy sets. Moreover, he is a member of the editorial boards of IEEE Transactions on Fuzzy Systems, Information Fusion, and Fuzzy Sets and Systems, and is editor-in-chief of the Mathware & Soft Computing magazine (EUSFLAT). Since 2015 he has been an IFS Fellow and an IEEE Senior Member.