Andrea Santilli
Accelerating Transformer Inference for Translation via Parallel Decoding
Autoregressive decoding limits the efficiency of transformers for Machine Translation (MT). The community proposed specific network …
Andrea Santilli
,
Silvio Severino
,
Emilian Postolache
,
Valentino Maiorca
,
Michele Mancusi
,
Riccardo Marin
,
Emanuele Rodolà
PDF
Cite
arXiv
GitHub
Multimodal Neural Databases
The rise in loosely-structured data available through text, images, and other modalities has called for new ways of querying them. …
Giovanni Trappolini
,
Andrea Santilli
,
Emanuele Rodolà
,
Alon Halevy
,
Fabrizio Silvestri
PDF
Cite
arXiv
Latent Autoregressive Source Separation
Autoregressive models have achieved impressive results over a wide range of domains in terms of generation quality and downstream task …
Emilian Postolache
,
Giorgio Mariani
,
Michele Mancusi
,
Andrea Santilli
,
Luca Cosmo
,
Emanuele Rodolà
Cite
AAAI 2023
Multitask Prompted Training Enables Zero-Shot Task Generalization
Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks (Brown et al., …
Victor Sanh
,
Albert Webson
,
Colin Raffel
,
Stephen H. Bach
,
BigScience contributors, including
Andrea Santilli
PDF
Cite
ICLR 2022 (Oral)
KERMIT: Complementing Transformer Architectures with Encoders of Explicit Syntactic Interpretations
Syntactic parsers have dominated natural language understanding for decades. Yet, their syntactic interpretations are losing centrality …
Fabio Massimo Zanzotto
,
Andrea Santilli
,
Leonardo Ranaldi
,
Dario Onorati
,
Pierfrancesco Tommasino
,
Francesca Fallucchi
PDF
Cite
DOI
EMNLP 2020
A Kernel-based Approach for Irony and Sarcasm Detection in Italian
This paper describes the UNITOR system that participated to the Irony Detection in Italian Tweets task (IronITA) within the context of …
Andrea Santilli
,
Danilo Croce
,
Roberto Basili
PDF
Cite
DOI
EVALITA 2018
SyntNN at SemEval-2018 Task 2: is Syntax Useful for Emoji Prediction? Embedding Syntactic Trees in Multi Layer Perceptrons
In this paper, we present SyntNN as a way to include traditional syntactic models in multilayer neural networks used in the task of …
Andrea Santilli
,
Fabio Massimo Zanzotto
PDF
Cite
DOI
SemEval 2018