The video is reposted for educational purposes.
Source: https://vimeo.com/285806034

Abstract:
Text simplification (TS) is a monolingual text-to-text transformation task where an original (complex) text is transformed into a target (simpler) text. Most recent work is based on sequence-to-sequence neural models similar to those used for machine translation (MT). Unlike MT, TS data comprises more elaborate transformations, such as sentence splitting. It can also contain multiple simplifications of the same original text targeting different audiences, such as school grade levels. We explore these two features of TS to build models tailored for specific grade levels. Our approach uses a standard sequence-to-sequence architecture where the original sequence is annotated with information about the target audience and/or the (predicted) type of simplification operation. We show that it outperforms state-of-the-art TS approaches (by up to 3 BLEU and 12 SARI points, respectively), including when training data for the specific complex-simple combination of grade levels is not available, i.e. zero-shot learning.
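The annotation idea in the abstract can be sketched in a few lines. This is an illustrative example, not the authors' code: the tag names (`<grade_N>`, `<op_X>`) and the helper function are assumptions chosen to show how control tokens might be prepended to a source sentence before it is fed to a standard sequence-to-sequence model.

```python
def annotate_source(sentence, target_grade, operation=None):
    """Prepend control tokens encoding the target school grade level
    and, optionally, the (predicted) simplification operation.
    Tag formats here are hypothetical, for illustration only."""
    tags = [f"<grade_{target_grade}>"]
    if operation is not None:
        tags.append(f"<op_{operation}>")  # e.g. "split", "rewrite", "delete"
    return " ".join(tags + [sentence])

src = "The committee deliberated extensively before reaching a verdict."
print(annotate_source(src, target_grade=4, operation="split"))
# → <grade_4> <op_split> The committee deliberated extensively before reaching a verdict.
```

A standard encoder-decoder then consumes the annotated sequence unchanged; at inference time, swapping the grade tag steers the model toward a different target audience, which is what enables the zero-shot grade combinations mentioned above.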
...
https://www.youtube.com/watch?v=2KzawQu_4sw