Which approach can I use to generate text based on multiple inputs?
I have a little experience building various models, but I've never created anything like this, so I'm wondering if I can be pointed in the right direction.
I want to create (in Python) a model that will generate text based on multiple inputs, ranging from text input (vectorized) to timestamp and integer inputs.
For example, in the training data, the input might include:
eventType = ShotMade
shotType = 2
homeTeamScore = 2
awayTeamScore = 8
player = JR Smith
assist = George Hill
period = 1
and the output might be (possibly minus the hashtags):
JR Smith under the basket for 2! 8-4 CLE. #NBAonBTV #ThisIsWhyWePlay #PlayByPlayEveryDay #NBAFinals
or
JR Smith out here doing #WhateverItTakes to make Cavs fans forgive him. #NBAFinals
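For concreteness, here is how I picture one training example as a Python input/target pair (the dict layout is just a sketch of my own; the field names, values, and tweet come from the example above):

```python
# Hypothetical layout of a single training example (the pairing
# format is a sketch, not a fixed dataset schema).
training_example = {
    "inputs": {
        "eventType": "ShotMade",
        "shotType": 2,
        "homeTeamScore": 2,
        "awayTeamScore": 8,
        "player": "JR Smith",
        "assist": "George Hill",
        "period": 1,
    },
    "target": "JR Smith under the basket for 2! 8-4 CLE. "
              "#NBAonBTV #ThisIsWhyWePlay #PlayByPlayEveryDay #NBAFinals",
}
```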
Where is the best place to look to gain a good understanding of how to do this?
Tags: neural-networks, deep-learning, python, generative-model
1 Answer
Generally, text generators work by modeling the joint distribution of the text via its Bayesian forward decomposition:
$$
\begin{align*}
p(w_1, w_2, \ldots, w_n) &= p(w_1)\, p(w_2 \mid w_1)\, p(w_3 \mid w_2, w_1) \cdots p(w_n \mid \{w_i\}_{i<n}) \\
&= \prod_{i=1}^{n} p(w_i \mid \{w_k\}_{k<i})
\end{align*}
$$
From a modeling perspective, this is right up an RNN's alley: the recurrent state can carry information from $\{w_k\}_{k<i}$ to learn a representation used to predict $w_i$.
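As a minimal sketch of this unconditional case (assuming PyTorch and placeholder vocabulary/dimension sizes, none of which are given in the question), an RNN language model trained on next-word prediction maximizes exactly this factorized likelihood:

```python
# Minimal sketch: an autoregressive RNN language model that factorizes
# p(w_1, ..., w_n) = prod_i p(w_i | w_{<i}). All sizes are placeholders.
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, hidden=None):
        x = self.embed(tokens)             # (batch, seq, embed_dim)
        h, hidden = self.rnn(x, hidden)    # h_t summarizes w_1..w_t
        return self.out(h), hidden         # logits over the next word

# Teacher forcing: position t predicts token t+1, so the loss is
# -sum_i log p(w_i | w_{<i}) averaged over the batch.
model = RNNLanguageModel()
tokens = torch.randint(0, 10_000, (4, 20))   # dummy word ids
logits, _ = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
```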
Now, in your specific case, you're interested in a conditional text generator, so you're trying to model $p(w_1, w_2, \ldots, w_n \mid \{v_j\}_j)$, but the same tactic works:
$$
\begin{align*}
p(w_1, w_2, \ldots, w_n \mid \{v_j\}_j) &= p(w_1 \mid \{v_j\}_j)\, p(w_2 \mid w_1, \{v_j\}_j) \cdots p(w_n \mid \{w_i\}_{i<n}, \{v_j\}_j) \\
&= \prod_{i=1}^{n} p(w_i \mid \{w_k\}_{k<i}, \{v_j\}_j)
\end{align*}
$$
So, in your RNN or other forward-based model, you can use the exact same approach: additionally embed the conditional inputs you have and infuse them into the model somehow (in practice, I have seen this done through attention, concatenation, or some other common approach).
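Here is a minimal sketch of the concatenation variant under stated assumptions (PyTorch; made-up vocabulary and dimension sizes; the question's numeric fields lumped into a single vector): the structured fields are encoded into a context vector $v$ that is appended to the word embedding at every step.

```python
# Sketch of a conditional generator: encode the structured fields into a
# context vector v and concatenate it to the word embedding at each step.
# All sizes and field encodings are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalRNNGenerator(nn.Module):
    def __init__(self, vocab_size=10_000, n_event_types=10, n_players=500,
                 embed_dim=128, cond_dim=64, hidden_dim=256):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.event_embed = nn.Embedding(n_event_types, cond_dim)
        self.player_embed = nn.Embedding(n_players, cond_dim)
        # shotType, homeTeamScore, awayTeamScore, period as one numeric vector
        self.numeric = nn.Linear(4, cond_dim)
        self.rnn = nn.GRU(embed_dim + 3 * cond_dim, hidden_dim,
                          batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, event_id, player_id, numeric_feats, hidden=None):
        v = torch.cat([self.event_embed(event_id),
                       self.player_embed(player_id),
                       self.numeric(numeric_feats)], dim=-1)  # (batch, 3*cond)
        x = self.word_embed(tokens)                           # (batch, T, emb)
        v = v.unsqueeze(1).expand(-1, x.size(1), -1)          # repeat per step
        h, hidden = self.rnn(torch.cat([x, v], dim=-1), hidden)
        return self.out(h), hidden                            # next-word logits
```

The same context vector could instead initialize the hidden state or feed an attention mechanism; concatenation is just the simplest of the infusion options mentioned above.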
My recommendation (depending on the computational power you have) is to take advantage of the recent fad of pre-trained language models. Specifically, ones trained on next-word prediction will probably do the job best. A good example is GPT-2; if you check out its GitHub repository, the code is very readable and easy to adjust for adding conditional input in the ways I have described.
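The simplest way to bolt conditional input onto a pre-trained model is to serialize the structured fields into a text prefix and let the model continue it. The sketch below uses the Hugging Face transformers library rather than the OpenAI repository mentioned above, and the prefix format is an assumption; in practice you would fine-tune on (prefix, tweet) pairs before the generations become useful:

```python
# Sketch: condition GPT-2 by serializing the fields into a text prefix.
# Uses the Hugging Face `transformers` package (an assumption; the
# GitHub pointer above is to OpenAI's own GPT-2 code).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prefix = ("eventType=ShotMade | shotType=2 | homeTeamScore=2 | "
          "awayTeamScore=8 | player=JR Smith | assist=George Hill | "
          "period=1 | tweet:")
inputs = tokenizer(prefix, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40, do_sample=True,
                        top_k=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```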