
Showing posts from October, 2018

Blog Post #3

I realize that I have not yet explained why I chose automatic text summarization. To keep things short, I've never really enjoyed reading for academic purposes. I like reading to learn about topics I'm interested in (e.g. Computer Science) and reading fiction for pleasure, but when it comes to reading for classes I'm taking just for the credits, I find it hard to stay fully engaged with the material. From talking to my peers, I realized that this is the general consensus. With the rise of Machine Learning and Deep Learning, I figured it would be interesting to develop an automatic abstractive text summarization tool that students like me could use.

Picking up from the last blog post... A lot has happened. I decided to explore Google/Stanford's model, which uses a pointer mechanism and a coverage mechanism to address some of the problems that general models for text summariza...
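To give a rough idea of what the pointer part does, here is a minimal sketch of the mixing step described in the pointer-generator paper (See et al., 2017): the model blends its normal vocabulary distribution with a "copy" distribution over the source words, weighted by a learned probability p_gen. The function name and the toy numbers below are my own illustrative assumptions; in the real model, p_gen, the vocabulary distribution, and the attention weights all come from the trained network.

```python
def final_distribution(p_gen, vocab_dist, attention, source_tokens):
    """Mix the generation distribution with a copy distribution:
    P(w) = p_gen * P_vocab(w) + (1 - p_gen) * (attention mass on w).
    """
    # Scale the vocabulary distribution by the generation probability.
    mixed = {w: p_gen * p for w, p in vocab_dist.items()}
    # Add copy probability: attention weight on each source position
    # flows to the token at that position (even out-of-vocabulary ones).
    for a, tok in zip(attention, source_tokens):
        mixed[tok] = mixed.get(tok, 0.0) + (1.0 - p_gen) * a
    return mixed

# Toy example: "germany" is out-of-vocabulary, so it can only be copied.
vocab_dist = {"the": 0.5, "won": 0.3, "beat": 0.2}
attention = [0.7, 0.2, 0.1]          # one weight per source token
source = ["germany", "beat", "the"]
dist = final_distribution(0.6, vocab_dist, attention, source)
```

This is why the pointer mechanism helps with rare and out-of-vocabulary words: a token that is not in the output vocabulary at all can still receive probability by being copied straight from the source text.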

Update 9/23

After talking with Professor Li, I was able to narrow my project down into something more feasible. I decided to drop the implementation part of the project completely and focus on the model aspect instead. Also, rather than creating my own model, I decided, with Professor Li's help, to take one of the working models out there, explore some of the problems it faces, and look for possible solutions. This reduces the scope to something I can actually finish in a semester, and I was glad to receive some guidance on how to do so.

On to what I've done with the project... Over the summer, I did some research and learned as much as possible about automatic text summarization. Automatic text summarization is considered a sequence-to-sequence (seq2seq) prediction problem, meaning it takes a sequence as input and produces another sequence as output. A model structure that most pe...
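The seq2seq shape described above can be sketched very roughly in code. This is a toy stand-in, not a real neural model: I use sequence reversal as the task so the example stays self-contained, and the encoder "state" is just a list rather than the hidden vectors a trained network would produce. The point is only the structure: an encoder folds the whole input sequence into a state, and a decoder then emits the output sequence one token at a time.

```python
def encode(tokens):
    """Encoder: consume the input sequence into a fixed 'state'.
    Here the state is simply a stack of tokens; in a real seq2seq
    model it would be the hidden state of an RNN or similar."""
    state = []
    for tok in tokens:
        state.append(tok)
    return state

def decode(state):
    """Decoder: generate the output sequence token by token,
    conditioned on the encoder state, until it decides to stop
    (here, when the state is exhausted)."""
    output = []
    while state:
        output.append(state.pop())  # emit one token per step
    return output

summary = decode(encode(["the", "cat", "sat"]))
# summary == ["sat", "cat", "the"]
```

In an actual summarizer the input sequence is the article's tokens and the output sequence is the summary's tokens, but the encode-then-decode structure is the same.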