
Retirement Ceremony of Prof. José Tribolet

Framing Enterprise Engineering within General Systems Theory: Perspectives of a Human Centered Future

access_time September 23, 2019 at 05:00PM
place Centro de Congressos IST
face Prof. José Tribolet

Leveraging Existing Technologies to Improve Large-Scale Recommender Systems

Recommender Systems aim to provide valuable recommendations to their users. Most research on Recommender Systems focuses exclusively on improving recommendation quality, overlooking the computational efficiency of the proposed solutions. Although Collaborative Filtering Recommender Systems typically have few ratings available, this work shows that, by strategically removing redundant ratings, it is possible to build a similarity metric with improved computational efficiency. This work focuses on improving the computational efficiency of similarity metrics while also enhancing recommendation quality, using two different approaches. The first relies on Collaborative Filtering, i.e., exclusively on ratings, to produce a computationally efficient similarity metric for Recommender Systems. The second uses contextual information about users and items to further improve recommendation quality, while maintaining the same computational efficiency. The solutions proposed here can be readily deployed on real Recommender Systems.

The first approach improves similarity metrics for Memory-based Collaborative Filtering using Fuzzy Models. Memory-based Collaborative Filtering relies heavily on similarity metrics to produce recommendations. Fuzzy Fingerprints are used as a framework to create a novel similarity metric, providing a fast and effective solution. The second approach also uses Fuzzy Fingerprints: it either combines contextual information and ratings into a single Fuzzy Fingerprint, or creates a multi-context Fuzzy Fingerprint in which each type of contextual information has its own Fuzzy Fingerprint. The contextual Fuzzy Fingerprints then feed a ranking fusion algorithm that produces the similarity values. This work is validated using four well-known datasets: ML-1M, HetRec 2011, Netflix and Jester.

The Fuzzy Fingerprint similarity improves recommendation quality and, more importantly, requires four times fewer computational resources than current solutions on large datasets. When contextual information is used, recommendation quality improves further, either by combining contextual information and ratings into a single Fuzzy Fingerprint or by using multi-context Fuzzy Fingerprints, while maintaining a computational efficiency comparable to that of well-known similarity metrics.
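The Fuzzy Fingerprint idea can be sketched roughly as follows. This is a minimal, illustrative Python sketch of the general approach, not the thesis's exact formulation: a user's fingerprint keeps only the top-k items ranked by rating, each assigned a fuzzy membership that decays with rank, and similarity is the normalized overlap of two fingerprints. The linear membership function, the value of k, and all names are assumptions.

```python
# Illustrative sketch of a Fuzzy Fingerprint user similarity for
# memory-based collaborative filtering. The linear membership function
# and top-k truncation are assumptions, not the thesis's exact method.

def fingerprint(ratings, k):
    """Top-k items by rating, mapped to memberships that decay with rank."""
    top = sorted(ratings, key=ratings.get, reverse=True)[:k]
    # Rank 0 gets membership 1.0, rank k-1 gets 1/k.
    return {item: 1.0 - rank / k for rank, item in enumerate(top)}

def ff_similarity(fp_a, fp_b):
    """Normalized overlap: sum of min memberships over shared items."""
    shared = fp_a.keys() & fp_b.keys()
    if not shared:
        return 0.0
    overlap = sum(min(fp_a[i], fp_b[i]) for i in shared)
    return overlap / min(sum(fp_a.values()), sum(fp_b.values()))

# Hypothetical users: only the top-k rated items enter the fingerprint,
# so the metric touches far fewer ratings than Pearson or cosine would.
alice = fingerprint({'m1': 5, 'm2': 4, 'm3': 2, 'm4': 1}, k=3)
bob = fingerprint({'m1': 5, 'm2': 5, 'm5': 3}, k=3)
print(round(ff_similarity(alice, bob), 3))  # ≈ 0.833
```

Comparing only the truncated fingerprints, rather than full rating vectors, is what makes this family of metrics cheap on large, sparse datasets.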

access_time September 05, 2019 at 02:00PM
place Anfiteatro PA-3 (Piso -1, Pavilhão de Matemática), IST, Alameda
local_offer Doctoral exam
person Candidate: André Filipe Caldaça da Silva Carvalho
supervisor_account Advisor: Prof. Pável Pereira Calado / Prof. João Paulo Baptista de Carvalho

PerGUID: Personality-Based Graphical User Interface Design Guidelines

Individual differences play a major role in human-computer interaction. In particular, personality shapes how we process and act on the world, and how users perceive and accept technology. Nevertheless, there is limited evidence on the impact of different personality types on graphical user interfaces (GUI). Moreover, there is limited work on implicit personality assessment from biofeedback. To approach these issues, we propose the study and inclusion of psychological variables from the Five-Factor Model and the Locus of Control in GUI design, providing a better user experience (UX) through a personality-based adaptive GUI that detects psychological traits from biofeedback. Participants (N=100) will be assessed with the NEO Personality Inventory Revised (NEO PI-R) and Levenson Multidimensional Locus of Control (LMLoC) scales for psychological evaluation; the System Usability Scale (SUS), Technology Acceptance Model 3 (TAM3), and NASA Task Load Index (NASA-TLX) for UX assessment; and a BITalino for biofeedback acquisition. A personality-based adaptive brain-computer interface carries the promise of improving UX design techniques and biofeedback-based personality classification, allowing designers to better understand their audience while taking advantage of physiological computing to implicitly collect users' psychological constructs.
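Of the instruments listed, the System Usability Scale has a simple, standard scoring rule worth recalling: odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the sum is scaled by 2.5 to a 0-100 range. A minimal sketch:

```python
def sus_score(responses):
    """Standard SUS scoring: 10 Likert responses (1-5) -> 0-100 score.

    Odd-numbered items are positively worded (contribute response - 1);
    even-numbered items are negatively worded (contribute 5 - response).
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs 10 responses on a 1-5 scale")
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i=0 is item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best case → 100.0
```

Note that a neutral response of 3 on every item yields 50.0, not 0, which is why raw SUS scores are usually interpreted against published benchmarks rather than as percentages.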

access_time July 18, 2019 at 02:00PM
place Sala 0.19, Pavilhão Informática II, IST, Alameda
local_offer CAT exam
person Candidate: Tomás Almeida e Silva Martins Alves
supervisor_account Advisor: Prof.ª Sandra Gama / Prof. Daniel Gonçalves / Prof.ª Joana Calado

Complex networks analysis from an edge perspective

If we carefully observe our daily lives and the systems in which we participate, it is easy to see that everything is somehow connected. From species evolution to social relations, passing through all the supply chain systems we know, networks are the simplest representation of these systems. Notwithstanding this simplicity, these networks often underlie complex dynamics. Species and population evolution are subject to many complex interactions, and individuals' states -- individual choices, epidemic states, strategic behaviors, opinions, among others -- are influenced by social ties and by the overall topology of interaction. These networks, called complex networks, exhibit a prevalence of certain features shared between completely different systems, thus defying the limits of traditional techniques of analysis and intriguing the research community. In this thesis we aim to contribute to the study of the relationship between the structure and the dynamics of these complex networks. Usually, the main approach to studying complex networks is centered on the importance of nodes. However, it is our understanding that an edge-perspective analysis also provides fundamental and complementary information on the structure and behavior of complex networks. Given this, throughout this dissertation we approach complex networks from an edge perspective, centering our attention on the properties of the edges. Our contributions provide new metrics, models and computational tools. We start by contributing a new edge centrality measure. Next, we focus on analyzing local patterns/subgraphs whose edges carry informative labels, highlighting that observing only nodes and edges individually is sometimes not enough to fully understand the dynamics and/or the structure of a system. Finally, we observe that representing a system with a single network is often insufficient to reproduce its behavior, making it necessary to consider networks at multiple scales, i.e. networks of networks. Our contribution on this subject is a new computational tool that allows us to model and simulate a system represented as a network of networks.
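The thesis's own edge centrality measure is not specified in this abstract, but the flavor of edge-perspective analysis can be illustrated with a classic edge-level quantity: the neighborhood overlap of an edge, i.e. the Jaccard similarity of its endpoints' neighborhoods. Low overlap flags bridge-like edges that node-centric measures can miss. A sketch, with an assumed toy graph:

```python
# Edge-perspective illustration: neighborhood overlap of an edge.
# This is a classic edge-level metric, NOT the new measure proposed
# in the thesis; the toy graph below is an assumption.

def neighborhood_overlap(adj, u, v):
    """Jaccard overlap of the endpoints' neighborhoods for edge (u, v).
    Values near 0 indicate bridge-like ('weak') edges."""
    nu = adj[u] - {v}
    nv = adj[v] - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

# Toy graph: a triangle a-b-c with a pendant node d attached to c.
adj = {
    'a': {'b', 'c'},
    'b': {'a', 'c'},
    'c': {'a', 'b', 'd'},
    'd': {'c'},
}
print(neighborhood_overlap(adj, 'a', 'b'))  # inside the triangle → 1.0
print(neighborhood_overlap(adj, 'c', 'd'))  # bridge to pendant  → 0.0
```

Ranking edges by such scores, rather than ranking nodes, is the simplest instance of the edge-centric viewpoint the thesis develops.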

access_time July 02, 2019 at 02:30PM
place Sala 4.41, 2.º Piso do Pavilhão de Civil, IST, Alameda
local_offer Doctoral exam
person Candidate: Andreia Sofia Monteiro Teixeira
supervisor_account Advisor: Prof. Alexandre Paulo Lourenço Francisco / Prof. Francisco João Duarte Cordeiro

Software-Defined Systems for Network-Aware Service Composition and Workflow Placement

Composing and scheduling workflows at Internet scale require communication and coordination across various services in heterogeneous execution environments - from data centers and clouds to the edge environments operated by multiple service providers. Services are diverse, including variants such as web services, network services, and data services. Service description standards and protocols focus on interoperability across service interfaces, to enable workflows spanning various service providers. Nevertheless, in practice, standardization of the interfaces remains mostly limited. Furthermore, efficient resource provisioning for workflows of several users from multiple infrastructures requires collaboration and cooperation of the infrastructure providers. Current approaches are limited in scalability and optimality when provisioning resources for user workflows spanning numerous infrastructure providers. Network Softwarization is changing the network landscape at every stage, from building and incrementally deploying the environment to maintaining it. Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) are two core tenets of network softwarization. SDN offers a logically centralized control plane by abstracting away the control of the network devices in the data plane. NFV virtualizes dedicated hardware middleboxes and deploys them on top of servers and data centers as network functions. Thus, network softwarization enables efficient management of the system, by enhancing its control and improving the reusability of the network services. In this work, we aim to exploit network softwarization to compose workflows of distinct services, in heterogeneous infrastructures ranging from data centers to the edge.
We thus intend to mitigate the challenges concerning resource management and interoperability of heterogeneous infrastructures, to efficiently compose service workflows, while sharing the network and the computing resources across several users. To this end, we propose three significant contributions. First, we extend SDN in cloud and data center environments to unify various phases of development and deploy the workloads seamlessly, from simulations and emulations to physical deployment environments. We further extend this work to support multiple Service Level Agreements (SLAs) across diverse network flows in the data centers, by selectively enforcing redundancy on the network flows. Thus, we aim for Quality of Service (QoS) and efficient resource provisioning, while adhering to the policies of the users. Finally, we design a cloud-assisted overlay network, as a latency-aware virtual connectivity provider. Consequently, we propose cost-efficient data transfers at Internet scale, by separating the network from the infrastructure. Second, we propose a scalable architecture to compose service chains in wide area networks efficiently. We extend SDN and Message-Oriented Middleware (MOM), for a logically centralized composition and execution of service workflows. We thus propose a Software-Defined Service Composition (SDSC) framework for web service compositions, Network Service Chains (NSCs), and a network-aware execution of data services. We further present Software-Defined Systems (SDS) consisting of virtual network allocation strategies for multi-tenant service executions in large-scale networks comprised of multiple domains. Third, we investigate how our proposed SDS can operate efficiently for real-world application scenarios of heterogeneous infrastructures. 
While web services are traditionally built following standards and best practices such as the Web Services Description Language (WSDL), network services and data services offered by different service providers often fall short in providing common Application Programming Interfaces (APIs), often resulting in vendor lock-in. We look into facilitating interoperability across service implementations and deployments, to enable seamless migrations. We take big data applications and smart environments, such as Cyber-Physical Systems (CPS) and the Internet of Things (IoT), as our two application scenarios. We thus build CPS and big data applications as composable service chains, offering them an interoperable execution.
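As a toy illustration of the network-aware placement problem this work addresses (all host names and latency values below are assumptions, and a real SDN controller would use live topology state rather than a static matrix), one can brute-force the placement of a short service chain so as to minimize end-to-end latency:

```python
from itertools import product

def place_chain(latency, hosts, chain_len, src, dst):
    """Brute-force placement of a linear service chain on candidate hosts,
    minimizing end-to-end latency src -> s1 -> ... -> sN -> dst.
    Toy sketch only; scalable placement needs heuristics, not enumeration."""
    best, best_cost = None, float('inf')
    for placement in product(hosts, repeat=chain_len):
        path = [src, *placement, dst]
        cost = sum(latency[a][b] for a, b in zip(path, path[1:]))
        if cost < best_cost:
            best, best_cost = placement, cost
    return best, best_cost

# Assumed pairwise latencies (ms) between an edge site, two data
# centers, and a cloud region.
latency = {
    'edge':  {'edge': 0,  'dc1': 10, 'dc2': 30, 'cloud': 50},
    'dc1':   {'edge': 10, 'dc1': 0,  'dc2': 15, 'cloud': 25},
    'dc2':   {'edge': 30, 'dc1': 15, 'dc2': 0,  'cloud': 5},
    'cloud': {'edge': 50, 'dc1': 25, 'dc2': 5,  'cloud': 0},
}
# Place a two-service chain between the edge and the cloud.
print(place_chain(latency, ['dc1', 'dc2'], 2, 'edge', 'cloud'))
# → (('dc1', 'dc2'), 30)
```

The interesting property, which the abstract's SDS contributions generalize, is that the cheapest placement splits the chain across providers: neither data center alone is optimal.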

access_time July 01, 2019 at 03:00PM
place Sala 0.17, Pavilhão de Informática II do IST, Alameda
local_offer Doctoral exam
person Candidate: Pradeeban Kathiravelu
supervisor_account Advisor: Prof. Luís Manuel Antunes Veiga
