Replies: 2 comments 2 replies
-
@rickyloynd-microsoft @gagb might be interested in this topic.
-
This is a great question! Putting on my researcher hat: it's an open question, and we'd probably have to implement both and measure empirical performance!
-
Off the cuff, teachability sounds like a game changer for code-assisting LMs, but what are the limitations? We have a proprietary codebase with hundreds of first-party C++ CMake library targets, and hundreds more third-party libraries.
In theory, it seems like I could set up a multi-agent AutoGen workflow with the goal of studying the codebase: detecting replicated functionality, redundant implementations, and detached spaghetti code, and answering "how do I?" questions.
But would that stretch the practical limits of teachability's memory? Would this undertaking be better served by continued training of a model like Llama 2 on the codebase?
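To make the memory question concrete, here is a toy sketch of the memo-store idea behind teachability: teachings are saved as (trigger, advice) pairs and later retrieved by similarity to the current query. AutoGen's actual implementation uses vector embeddings for retrieval; this sketch substitutes simple word-overlap scoring to stay dependency-free, and the CMake preset and `utils::JsonReader` strings are hypothetical examples, not part of any real codebase.

```python
# Toy sketch of a teachability-style memo store. Teachings are stored as
# (trigger, advice) pairs; retrieval ranks stored triggers against a query.
# Real teachability uses embedding similarity; we use word overlap here.

class MemoStore:
    def __init__(self):
        self.memos = []  # list of (trigger_text, advice_text)

    def add(self, trigger, advice):
        """Record a teaching: when a query resembles `trigger`, surface `advice`."""
        self.memos.append((trigger, advice))

    def _score(self, query, trigger):
        # Fraction of the trigger's words that appear in the query.
        q = set(query.lower().split())
        t = set(trigger.lower().split())
        return len(q & t) / max(len(t), 1)

    def retrieve(self, query, threshold=0.5):
        """Return advice for all triggers scoring at or above `threshold`."""
        scored = [(self._score(query, trig), adv) for trig, adv in self.memos]
        return [adv for score, adv in sorted(scored, reverse=True) if score >= threshold]


store = MemoStore()
store.add("build the cmake targets", "Use the top-level preset: cmake --preset release")
store.add("parse json logs", "Reuse utils::JsonReader instead of writing a new parser")

hits = store.retrieve("how do I build cmake targets for release")
# → ["Use the top-level preset: cmake --preset release"]
```

One implication for the scale question above: this kind of store grows linearly with the number of teachings and retrieves only a few relevant memos per query, so the context window isn't the bottleneck; retrieval quality is, which is where the fine-tuning alternative starts to look attractive.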