Learning to Ask Screening Questions for Job Postings

Shan Li
Jaewon Yang
Mustafa Emre Kazdagli

SIGIR '20: The 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, China, July 2020, pp. 549–558.

Keywords:
job posting, learning model, LinkedIn, neural network language model, active job

Abstract:

At LinkedIn, we want to create economic opportunity for everyone in the global workforce. A critical aspect of this goal is matching jobs with qualified applicants. To improve hiring efficiency and reduce the need to manually screen each applicant, we develop a new product where recruiters can ask screening questions online so that the…

Introduction
  • LinkedIn is the largest hiring marketplace in the world, hosting over 20 million active job postings that are created across various channels, including LinkedIn’s on-site recruiting products and integrations with external hiring products.

    ⋆ These authors contributed equally.

    [Figure: example screening questions, each composed of a screening question template and a screening question parameter; Tools-typed, Education-typed, and Language-typed screening questions are shown.]

    In hiring, interviewing applicants is costly and inefficient.
  • Existing methods match members to job postings based on the members’ experiences [32] or profile attributes [8, 13, 26, 42].
  • These models rely heavily on the assumption that applicants’ online profiles and resumes are always up to date and contain all the information that hiring companies need.
  • In Sec. 6.4 the authors find that job posting text is sub-optimal for modeling job qualifications because it often contains trivial and unnecessary requirements.
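A structured screening question as illustrated above pairs a question template with an extracted parameter. A minimal sketch of that idea (the class, field names, and template strings here are hypothetical, not the paper's exact schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScreeningQuestion:
    """A structured screening question: template + extracted parameter."""
    template: str   # e.g. "Do you speak {param} fluently?"
    sq_type: str    # e.g. "Tools", "Education", "Language"
    param: str      # e.g. "Microsoft Excel", "Bachelor's Degree", "Japanese"

    def render(self) -> str:
        # Fill the template slot with the extracted parameter.
        return self.template.format(param=self.param)

sq = ScreeningQuestion(
    template="Do you speak {param} fluently?",
    sq_type="Language",
    param="Japanese",
)
print(sq.render())  # -> Do you speak Japanese fluently?
```

Representing questions as (template, parameter) pairs rather than free text is what lets the answers be matched automatically against applicant responses.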
Highlights
  • With all of the above challenges in mind, here we propose a two-step Screening Question Generation (SQG) model named Job2Questions that, given the content of a job posting, first generates all possible structured Screening Question (SQ) candidates using a deep learning model, then ranks them and returns the top-k screening questions as the model output
  • We evaluated four question ranking models listed in Sec. 6.1 using 22,055 triples from 6,675 jobs and report the Area Under the Receiver Operating Characteristic curve (AUROC), Precision@k, Recall@k, and Normalized Discounted Cumulative Gain at k (NDCG@k)
  • We proposed a novel Screening Question Generation (SQG) task that automatically generates screening questions for job postings
  • We developed a general candidate-generation-and-ranking SQG framework and presented LinkedIn’s in-production Job2Questions model
  • We used BERT as our sentence encoder to see how an advanced NLP model can help improve performance on this specific task
  • We provided design details of Job2Questions, including data preparation, deep transfer learning-based question template classification modeling, parameter extraction, and XGBoost-based question ranking
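The two-step generate-and-rank framework above can be sketched end to end. The classifier and scorer below are toy stand-ins for the paper's deep learning template classifier and XGBoost ranker, not the production models:

```python
from typing import Callable, List, Optional, Tuple

# A candidate SQ as a (template type, extracted parameter) pair,
# e.g. ("Language", "Japanese"). Names are illustrative.
Candidate = Tuple[str, str]

def generate_and_rank(
    sentences: List[str],
    classify: Callable[[str], Optional[Candidate]],
    score: Callable[[Candidate], float],
    k: int = 3,
) -> List[Candidate]:
    # Step 1: candidate generation -- classify each job-posting sentence
    # and keep those mapped to a template with an extracted parameter.
    candidates = [c for c in (classify(s) for s in sentences) if c is not None]
    # Step 2: question ranking -- score candidates and return the top-k.
    return sorted(candidates, key=score, reverse=True)[:k]

# Toy stand-ins for the trained models:
sentences = ["Fluent in Japanese.", "Join our fun team!", "Experience with Excel."]

def toy_classify(s: str) -> Optional[Candidate]:
    if "Japanese" in s:
        return ("Language", "Japanese")
    if "Excel" in s:
        return ("Tools", "Excel")
    return None  # sentence carries no screening requirement

top = generate_and_rank(
    sentences, toy_classify,
    score=lambda c: 2.0 if c[0] == "Language" else 1.0, k=2,
)
print(top)  # -> [('Language', 'Japanese'), ('Tools', 'Excel')]
```

Separating generation from ranking lets the second stage filter out trivial requirements (like the Internet Explorer example in Sec. 6.4) that the classifier still surfaces.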
Methods
  • The authors conducted extensive evaluations on the proposed Job2Questions (J2Q for short) model.
  • All above models are trained using the same dataset as described in Tab. 3.
  • For the neural network J2Q-TC-⋆ models, the authors use publicly available pre-trained models [2, 7] as initialization and fine-tune them.
  • For J2Q-TC-{NNLM, DAN, CNN}, the authors set the learning rate to 1e-3, the batch size to 256, and the drop-out rate to 0.4.
  • For J2Q-TC-BERT, the authors further truncate the input sentence to 32 tokens and set the learning rate to 5e-5.
  • All models are trained for at most 100 epochs with a 3-layer MLP
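For reference, the hyperparameters reported above can be collected in one place. This is a plain summary of the stated numbers; the field names are illustrative, not the authors' actual configuration format:

```python
# Hyperparameters as reported for the template classification models.
TRAIN_CONFIG = {
    "J2Q-TC-{NNLM,DAN,CNN}": {
        "learning_rate": 1e-3,
        "batch_size": 256,
        "dropout": 0.4,
    },
    "J2Q-TC-BERT": {
        "learning_rate": 5e-5,
        "max_input_tokens": 32,  # input sentences truncated to 32 tokens
    },
    "shared": {
        "max_epochs": 100,
        "head": "3-layer MLP",
        "init": "public pre-trained models [2, 7], then fine-tuned",
    },
}
```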
Results
  • Jobs that adopted SQ suggestions yielded 190% more recruiter-applicant interactions.
  • The authors used BERT as the sentence encoder to see how an advanced NLP model can help improve performance on this specific task.
  • As shown in Tab. 6, the proposed J2Q-QR-XGB-pairwise model outperforms the other baselines with up to a 24.03% improvement in NDCG.
  • The authors found that even when recruiters explicitly mention requirements such as “Access to computer with scanning, printing and faxing capabilities” or “Good working knowledge of Internet Explorer”, in more than 98% of cases they do not screen applicants based on them
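NDCG, the ranking metric behind the improvement reported from Tab. 6, can be computed as below. This is a standard reference implementation of the metric, not the authors' evaluation code:

```python
import math
from typing import Sequence

def dcg_at_k(rels: Sequence[float], k: int) -> float:
    # Discounted cumulative gain: relevance discounted by log2 of rank.
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg_at_k(rels: Sequence[float], k: int) -> float:
    # Normalize by the DCG of the ideal (sorted) ordering.
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

# Relevance of ranked SQ suggestions: positions 1 and 3 relevant, 2 not.
print(round(ndcg_at_k([1, 0, 1], 3), 4))  # -> 0.9197
```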
Conclusion
  • The authors proposed a novel Screening Question Generation (SQG) task that automatically generates screening questions for job postings.
  • The authors provided design details of Job2Questions, including data preparation, deep transfer learning-based question template classification modeling, parameter extraction, and XGBoost-based question ranking.
  • As for future work, the authors plan to infer SQs that are not explicitly mentioned in the job posting and investigate advanced question ranking methods to better model recruiter preferences.
  • The authors plan to investigate seq2seq models for template-free SQ generation.
Tables
  • Table1: Statistics of popular question generation datasets
  • Table2: Empirical CPU inference time per sentence. J2Q-TC-DAN is our current in-production model
  • Table3: Screening Question Generation dataset statistics
  • Table4: Examples of the crowdsourcing annotation task
  • Table5: Question template classification offline evaluation
  • Table6: Question ranking offline evaluation
  • Table7: Question ranking feature ablation study
  • Table8: Screening question suggestion online A/B test
  • Table9: Relationships between SQ answers and user profile
  • Table10: Question rejection rate case study
  • Table11: Per-industry SQ type distribution
Related work
  • Rule-based Question Generation. Rule-based models usually transform and formulate the questions based on the text input using a series of hand-crafted rules. ELIZA [39] generates question responses for conversations using human-made, keyword-based rules.

    [Figure: the Job2Questions pipeline; a job posting is tokenized into sentences (e.g. “Fluent in Japanese.” as s1), and each sentence sn is fed to the question template classifier.]
Reference
  • Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A Neural Probabilistic Language Model. Journal of Machine Learning Research 3, Feb (2003), 1137–1155.
  • Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, YunHsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal Sentence Encoder. arXiv:1803.11175 [cs] (April 2018). arXiv: 1803.11175.
  • Yllias Chali and Tina Baghaee. 2018. Automatic opinion question generation. In Proceedings of the 11th International Conference on Natural Language Generation. 152–158.
  • Guanliang Chen, Jie Yang, Claudia Hauff, and Geert-Jan Houben. 2018. LearningQ: A Large-Scale Dataset for Educational Question Generation. In Twelfth International AAAI Conference on Web and Social Media.
  • Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’16. ACM Press, San Francisco, California, USA, 785–794.
  • Yu-An Chung, Hung-Yi Lee, and James Glass. 2018. Supervised and Unsupervised Transfer Learning for Question Answering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Linguistics, New Orleans, Louisiana, 1585–1594.
  • Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs] (Oct. 2018). arXiv: 1810.04805.
  • Mamadou Diaby, Emmanuel Viennet, and Tristan Launay. 2013. Toward the next generation of recruitment tools: an online social network-based job recommender system. In 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2013). IEEE, 821–828.
  • Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to Ask: Neural Question Generation for Reading Comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, 1342–1352.
  • Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question Generation for Question Answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, 866–874.
  • Yifan Gao, Lidong Bing, Wang Chen, Michael R Lyu, and Irwin King. 2019. Difficulty controllable generation of reading comprehension questions. In Proc. 28th International Joint Conference on Artificial Intelligence (IJCAI).
  • Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. arXiv preprint arXiv:1603.08148 (2016).
  • Viet Ha-Thuc, Ye Xu, Satya Pradeep Kanduri, Xianren Wu, Vijay Dialani, Yan Yan, Abhishek Gupta, and Shakti Sinha. 2016. Search by ideal candidates: Next generation of talent search at linkedin. In Proceedings of the 25th International Conference Companion on World Wide Web. International World Wide Web Conferences Steering Committee, 195–198.
  • Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 770–778.
  • Michael Heilman and Noah A. Smith. 2009. Question Generation via Overgenerating Transformations and Ranking:. Technical Report. Defense Technical Information Center, Fort Belvoir, VA.
  • Chao Huang, Xian Wu, Xuchao Zhang, Chuxu Zhang, Jiashu Zhao, Dawei Yin, and Nitesh V. Chawla. 2019. Online Purchase Prediction via Multi-Scale Modeling of Behavior Dynamics. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’19). Association for Computing Machinery, Anchorage, AK, USA, 2613–2622.
  • Sergey Ioffe and Christian Szegedy. 2015. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv:1502.03167 [cs] (March 2015). arXiv: 1502.03167.
  • Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. 2015. Deep Unordered Composition Rivals Syntactic Methods for Text Classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, 1681–1691.
  • Xiaoqi Jiao, Fang Wang, and Dan Feng. 2018. Convolutional Neural Network for Universal Sentence Embeddings. In Proceedings of the 27th International Conference on Computational Linguistics. Association for Computational Linguistics, Santa Fe, New Mexico, USA, 2470–2481.
  • Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of Tricks for Efficient Text Classification. arXiv:1607.01759 [cs] (Aug. 2016). arXiv: 1607.01759.
  • Krishnaram Kenthapadi, Benjamin Le, and Ganesh Venkataraman. 2017. Personalized Job Recommendation System at LinkedIn: Practical Challenges and Lessons Learned. In Proceedings of the Eleventh ACM Conference on Recommender Systems (RecSys ’17). Association for Computing Machinery, Como, Italy, 346–347.
  • Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. 2017. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. arXiv:1609.04836 [cs, math] (Feb. 2017). arXiv: 1609.04836.
  • Diederik P. Kingma and Jimmy Ba. 2017. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs] (Jan. 2017). arXiv: 1412.6980.
  • Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding Comprehension Dataset From Examinations. arXiv:1704.04683 [cs] (Dec. 2017). arXiv: 1704.04683.
  • Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural Architectures for Named Entity Recognition. In NAACL-HLT. ACL, San Diego, California, 260–270.
  • Ran Le, Wenpeng Hu, Yang Song, Tao Zhang, Dongyan Zhao, and Rui Yan. 2019. Towards Effective and Interpretable Person-Job Fitting. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 1883–1892.
  • Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A Structured Self-attentive Sentence Embedding. arXiv:1703.03130 [cs] (March 2017). arXiv: 1703.03130.
  • Xuezhe Ma and Eduard Hovy. 2016. End-to-end Sequence Labeling via Bidirectional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, 1064–1074.
  • Ruslan Mitkov. 2003. Computer-aided generation of multiple-choice tests. In Proceedings of the HLT-NAACL 03 workshop on Building educational applications using natural language processing. 17–22.
  • Sinno Jialin Pan and Qiang Yang. 2010. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering 22, 10 (Oct. 2010), 1345–1359.
  • Ioannis Paparrizos, B. Barla Cambazoglu, and Aristides Gionis. 2011. Machine learned job recommendation. In Proceedings of the fifth ACM conference on Recommender systems (RecSys ’11). Association for Computing Machinery, Chicago, Illinois, USA, 325–328.
  • Chuan Qin, Hengshu Zhu, Tong Xu, Chen Zhu, Liang Jiang, Enhong Chen, and Hui Xiong. 2018. Enhancing person-job fit for talent recruitment: An abilityaware neural network approach. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. ACM, 25–34.
  • Chuan Qin, Hengshu Zhu, Chen Zhu, Tong Xu, Fuzhen Zhuang, Chao Ma, Jingshuai Zhang, and Hui Xiong. 2019. DuerQuiz: A Personalized Question Recommender System for Intelligent Job Interview. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2165– 2173.
  • Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, 2383–2392.
  • Frederick F Reichheld. 2003. The one number you need to grow. Harvard business review 81, 12 (2003), 46–55.
  • Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, and Shi Wang. 2018. Answer-focused and position-aware neural question generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 3930–3939.
  • Duyu Tang, Nan Duan, Tao Qin, Zhao Yan, and Ming Zhou. 2017. Question answering and question generation as dual tasks. arXiv preprint arXiv:1706.02027 (2017).
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Inc., 5998–6008.
  • Joseph Weizenbaum. 1966. ELIZA—a computer program for the study of natural language communication between man and machine. Commun. ACM 9, 1 (1966), 36–45.
  • Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level Neural Question Generation with Maxout Pointer and Gated Self-attention Networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Brussels, Belgium, 3901–3910.
  • Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2018. Neural Question Generation from Text: A Preliminary Study. In Natural Language Processing and Chinese Computing (Lecture Notes in Computer Science). Springer International Publishing, Cham, 662–671.
  • Chen Zhu, Hengshu Zhu, Hui Xiong, Chao Ma, Fang Xie, Pengliang Ding, and Pan Li. 2018. Person-Job Fit: Adapting the Right Talent for the Right Job with Joint Representation Learning. ACM Transactions on Management Information Systems (TMIS) 9, 3 (2018), 12.