3. Iserlohner Konferenzseminar zur Angewandten Informatik

IKAI@swf (Winter 2023/24)

Topic proposals, Master's program

(sorted by supervisor)

Prof. Dr. Doga Arinir

Prof. Dr. Christian Gawron

Prof. Dr. Heiner Giefers

[1] J. White et al., ‘A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT’. arXiv, Feb. 21, 2023. doi: 10.48550/arXiv.2302.11382.

[2] Q. Dong et al., ‘A Survey on In-context Learning’. arXiv, Jun. 01, 2023. doi: 10.48550/arXiv.2301.00234.

[3] L. Floridi, ‘AI as Agency Without Intelligence: on ChatGPT, Large Language Models, and Other Generative Models’, Philos. Technol., vol. 36, no. 1, p. 15, Mar. 2023, doi: 10.1007/s13347-023-00621-y.

[4] A. Vaswani et al., ‘Attention is All you Need’, in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2017. Accessed: Aug. 25, 2023. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html

[5] C. Xu, D. Guo, N. Duan, and J. McAuley, ‘Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data’. arXiv, May 23, 2023. doi: 10.48550/arXiv.2304.01196.

[6] J. Wei et al., ‘Chain-of-Thought Prompting Elicits Reasoning in Large Language Models’. arXiv, Jan. 10, 2023. doi: 10.48550/arXiv.2201.11903.

[7] L. De Angelis et al., ‘ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health’, Frontiers in Public Health, vol. 11, 2023, doi: 10.3389/fpubh.2023.1166120.

[8] T. Hagendorff, ‘Deception Abilities Emerged in Large Language Models’. arXiv, Jul. 31, 2023. doi: 10.48550/arXiv.2307.16513.

[9] R. Zellers et al., ‘Defending Against Neural Fake News’. arXiv, Dec. 11, 2020. doi: 10.48550/arXiv.1905.12616.

[10] C.-Y. Hsieh et al., ‘Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes’. arXiv, Jul. 05, 2023. doi: 10.48550/arXiv.2305.02301.

[11] M. Ahn et al., ‘Do As I Can, Not As I Say: Grounding Language in Robotic Affordances’. arXiv, Aug. 16, 2022. doi: 10.48550/arXiv.2204.01691.

[12] N. Perry, M. Srivastava, D. Kumar, and D. Boneh, ‘Do Users Write More Insecure Code with AI Assistants?’ arXiv, Dec. 16, 2022. doi: 10.48550/arXiv.2211.03622.

[13] M. Chen et al., ‘Evaluating Large Language Models Trained on Code’. arXiv, Jul. 14, 2021. doi: 10.48550/arXiv.2107.03374.

[14] J. Wei et al., ‘Finetuned Language Models Are Zero-Shot Learners’. arXiv, Feb. 08, 2022. doi: 10.48550/arXiv.2109.01652.

[15] B. Guo et al., ‘How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection’. arXiv, Jan. 18, 2023. doi: 10.48550/arXiv.2301.07597.

[16] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, ‘Improving Language Understanding by Generative Pre-Training’. OpenAI, 2018.

[17] X. Shen, Z. Chen, M. Backes, and Y. Zhang, ‘In ChatGPT We Trust? Measuring and Characterizing the Reliability of ChatGPT’. arXiv, Apr. 18, 2023. doi: 10.48550/arXiv.2304.08979.

[18] J. Liu, C. S. Xia, Y. Wang, and L. Zhang, ‘Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation’. arXiv, Jun. 12, 2023. doi: 10.48550/arXiv.2305.01210.

[19] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch, ‘Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents’. arXiv, Mar. 08, 2022. doi: 10.48550/arXiv.2201.07207.

[20] K. Valmeekam, A. Olmo, S. Sreedharan, and S. Kambhampati, ‘Large Language Models Still Can’t Plan (A Benchmark for LLMs on Planning and Reasoning about Change)’. arXiv, Apr. 07, 2023. doi: 10.48550/arXiv.2206.10498.

[21] H. Touvron et al., ‘LLaMA: Open and Efficient Foundation Language Models’. arXiv, Feb. 27, 2023. doi: 10.48550/arXiv.2302.13971.

[22] L. Gao et al., ‘PAL: Program-aided Language Models’. arXiv, Jan. 27, 2023. doi: 10.48550/arXiv.2211.10435.

[23] I. Singh et al., ‘ProgPrompt: Generating Situated Robot Task Plans using Large Language Models’. arXiv, Sep. 22, 2022. doi: 10.48550/arXiv.2209.11302.

[24] L. Beurer-Kellner, M. Fischer, and M. Vechev, ‘Prompting Is Programming: A Query Language for Large Language Models’, Proc. ACM Program. Lang., vol. 7, no. PLDI, pp. 1946–1969, Jun. 2023, doi: 10.1145/3591300.

[25] D. Cai, Y. Wang, L. Liu, and S. Shi, ‘Recent Advances in Retrieval-Augmented Text Generation’, in Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain: ACM, Jul. 2022, pp. 3417–3419. doi: 10.1145/3477495.3532682.

[26] E. Cambiaso and L. Caviglione, ‘Scamming the Scammers: Using ChatGPT to Reply Mails for Wasting Time and Resources’. arXiv, Feb. 10, 2023. doi: 10.48550/arXiv.2303.13521.

[27] T. Schick et al., ‘Toolformer: Language Models Can Teach Themselves to Use Tools’. arXiv, Feb. 09, 2023. doi: 10.48550/arXiv.2302.04761.

[28] L. Ouyang et al., ‘Training language models to follow instructions with human feedback’. arXiv, Mar. 04, 2022. doi: 10.48550/arXiv.2203.02155.

[29] A. Zou, Z. Wang, J. Z. Kolter, and M. Fredrikson, ‘Universal and Transferable Adversarial Attacks on Aligned Language Models’. arXiv, Jul. 27, 2023. doi: 10.48550/arXiv.2307.15043.

[30] J. D. Zamfirescu-Pereira, R. Y. Wong, B. Hartmann, and Q. Yang, ‘Why Johnny Can’t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts’, in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany: ACM, Apr. 2023, pp. 1–21. doi: 10.1145/3544548.3581388.

[31] Z. Luo et al., ‘WizardCoder: Empowering Code Large Language Models with Evol-Instruct’. arXiv, Jun. 14, 2023. doi: 10.48550/arXiv.2306.08568.

Prof. Dr. Hans-Georg Eßer



Contact:

Prof. Arinir, Doga: arinir.doga@fh-swf.de

Prof. Eßer, Hans-Georg: esser.hans-georg@fh-swf.de

Prof. Gawron, Christian: gawron.christian@fh-swf.de

Prof. Giefers, Heiner: giefers.heiner@fh-swf.de



(hge, 2023-10-05)