Dataset Viewer (auto-converted to Parquet)

Columns:
- question: string (lengths 6–1.87k)
- choices: sequence (length 4)
- answer: string (4 classes)
Which of the following scheduler policies are preemptive?
[ "FIFO (First In, First Out)", "SJF (Shortest Job First)", "STCF (Shortest Time to Completion First)", "I don't know" ]
C
Which of the following scheduler policies are preemptive?
[ "FIFO (First In, First Out)", "RR (Round Robin)", "SJF (Shortest Job First)", "I don't know" ]
B
Which of the following are correct implementations of the acquire function? Assume 0 means UNLOCKED and 1 means LOCKED. Initially l->locked = 0.
[ "c \n void acquire(struct lock *l)\n {\n if(cas(&l->locked, 0, 1) == 0)\n return;\n }", "c \n void acquire(struct lock *l)\n {\n for(;;)\n if(cas(&l->locked, 1, 0) == 1)\n return;\n }", "c \n void acquire(struct lock *l)\n {\n if(l->locked == 0) \n return;\n...
D
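As a side note on the acquire question above, here is a minimal sketch (my own illustration, not the quiz's C code) of why a correct acquire must retry in a loop. It simulates compare-and-swap in Python: `cas(lock, old, new)` is assumed to atomically swap and return the *previous* value, mirroring the convention in the question.

```python
# Sketch: simulated compare-and-swap semantics (assumption: single-threaded
# illustration; real CAS is an atomic hardware instruction).

def cas(lock, old, new):
    """Simulated compare-and-swap: returns the previous value."""
    prev = lock["locked"]
    if prev == old:
        lock["locked"] = new
    return prev

def acquire(lock):
    # Spin until we are the caller that flips 0 -> 1.
    while cas(lock, 0, 1) != 0:
        pass

lock = {"locked": 0}
acquire(lock)
assert lock["locked"] == 1

# A single-attempt version (no loop) would return without the lock when it
# is already held: the swap is refused and the caller must retry.
held = {"locked": 1}
assert cas(held, 0, 1) == 1   # previous value was 1, so acquisition failed
```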
Assume a user program executes the following tasks. Select all options that will use a system call.
[ "Encrypt \"Hello world\" by AES.", "Read the user's input \"Hello world\" from the keyboard.", "I don't know", "I don't know" ]
B
Assume a user program executes the following tasks. Select all options that will use a system call.
[ "Write \"Hello world\" to a file.", "Encrypt \"Hello world\" by AES.", "I don't know", "I don't know" ]
A
Assume a user program executes the following tasks. Select all options that will use a system call.
[ "Encrypt \"Hello world\" by AES.", "Send \"Hello world\" to another machine via Network Interface Card.", "I don't know", "I don't know" ]
B
What is the content of the inode?
[ "Filename", "String with the name of the owner", "File mode", "Capacity of the whole file system" ]
C
What is the content of the inode?
[ "String with the name of the owner", "Filename", "Capacity of the whole file system", "Hard links counter" ]
D
What is the content of the inode?
[ "Capacity of the whole file system", "Filename", "File size", "String with the name of the owner" ]
C
What is the content of the inode?
[ "Capacity of the whole file system", "Filename", "Index structure for data blocks", "String with the name of the owner" ]
C
In x86, what are the possible ways to transfer arguments when invoking a system call? For example, in the following code, string and len are sys_cputs’s arguments.
[ "Stack", "Instructions", "I don't know", "I don't know" ]
A
In x86, what are the possible ways to transfer arguments when invoking a system call? For example, in the following code, string and len are sys_cputs’s arguments.
[ "Registers", "Instructions", "I don't know", "I don't know" ]
A
What is the worst case complexity of listing files in a directory? The file system implements directories as hash-tables.
[ "$O(number of direntries in the file system)$", "$O(1)$", "$O(number of direntries in the directory)$", "$O(size of the file system)$" ]
C
In JOS, suppose one Env sends a page to another Env. Is the page copied?
[ "Yes", "No", "I don't know", "I don't know" ]
B
What are the drawbacks of non-preemptive scheduling compared to preemptive scheduling?
[ "It can lead to starvation especially for those real-time tasks", "Less computational resources need for scheduling and takes shorted time to suspend the running task and switch the context.", "I don't know", "I don't know" ]
A
What are the drawbacks of non-preemptive scheduling compared to preemptive scheduling?
[ "Bugs in one process can cause a machine to freeze up", "Less computational resources need for scheduling and takes shorted time to suspend the running task and switch the context.", "I don't know", "I don't know" ]
A
What are the drawbacks of non-preemptive scheduling compared to preemptive scheduling?
[ "Less computational resources need for scheduling and takes shorted time to suspend the running task and switch the context.", "It can lead to poor response time for processes", "I don't know", "I don't know" ]
B
Select valid answers about file descriptors (FD):
[ "FD is constructed by hashing the filename.", "The value of FD is unique for every file in the operating system.", "FD is usually used as an argument for read and write.", "I don't know" ]
C
Select valid answers about file descriptors (FD):
[ "The value of FD is unique for every file in the operating system.", "FD is constructed by hashing the filename.", "FDs are preserved after fork() and can be used in the new process pointing to the original files.", "I don't know" ]
C
Suppose a file system is used only for reading immutable files in a random fashion. What is the best block allocation strategy?
[ "Index allocation with B-tree", "Index allocation with Hash-table", "Continuous allocation", "Linked-list allocation" ]
C
Which of the following operations would switch the user program from user space to kernel space?
[ "Calling sin() in math library.", "Dividing integer by 0.", "I don't know", "I don't know" ]
B
Which of the following operations would switch the user program from user space to kernel space?
[ "Invoking read() syscall.", "Calling sin() in math library.", "I don't know", "I don't know" ]
A
Which of the following operations would switch the user program from user space to kernel space?
[ "Jumping to an invalid address.", "Calling sin() in math library.", "I don't know", "I don't know" ]
A
In which of the following cases does the TLB need to be flushed?
[ "Inserting a new page into the page table for kernel.", "Deleting a page from the page table.", "Inserting a new page into the page table for a user-space application.", "I don't know" ]
B
In which of the following cases does the TLB need to be flushed?
[ "Inserting a new page into the page table for a user-space application.", "Inserting a new page into the page table for kernel.", "Changing the read/write permission bit in the page table.", "I don't know" ]
C
In x86, select all synchronous exceptions.
[ "Divide error", "Timer", "Keyboard", "I don't know" ]
A
In x86, select all synchronous exceptions.
[ "Keyboard", "Page Fault", "Timer", "I don't know" ]
B
Once paging is enabled, does each of the load instruction / CR3 register / page table entry use a virtual or a physical address?
[ "Virtual / Physical / Physical", "Physical / Physical / Virtual", "Virtual / Virtual / Virtual", "Physical / Physical / Physical" ]
A
Which executions of an application are possible on a single-core machine?
[ "Both concurrent and parallel execution", "Parallel execution", "Concurrent execution", "Neither concurrent nor parallel execution" ]
C
In an x86 multiprocessor system with JOS, select all the correct options. Assume every Env has a single thread.
[ "One Env could run on two different processors simultaneously.", "Two Envs could run on the same processor simultaneously.", "Two Envs could run on two different processors simultaneously.", "I don't know" ]
C
In an x86 multiprocessor system with JOS, select all the correct options. Assume every Env has a single thread.
[ "One Env could run on two different processors simultaneously.", "Two Envs could run on the same processor simultaneously.", "One Env could run on two different processors at different times.", "I don't know" ]
C
In JOS, suppose a value is passed between two Envs. What is the minimum number of executed system calls?
[ "1", "2", "3", "4" ]
B
What does the strace tool do?
[ "To trace a symlink, i.e. to find where the symlink points to.", "To remove wildcards from the string.", "It prints out system calls for a given program. These system calls are always called when executing the program.", "It prints out system calls for a given program. These system calls are called only for that...
D
Consider the following context-free grammar \(G\) (where \(\text{S}\) is the top-level symbol): \(R_{01}: \text{S} \rightarrow \text{NP VP}\) \(R_{02}: \text{NP} \rightarrow \text{NP0}\) \(R_{03}: \text{NP} \rightarrow \text{Det NP0}\) \(R_{04}: \text{NP0} \rightarrow \text{N}\) \(R_{05}: \text{NP0} \rightarrow \text{Adj N}\) \(R_{06}: \text{NP0} \rightarrow \text{NP0 PNP}\) \(R_{07}: \text{VP} \rightarrow \text{V}\) \(R_{08}: \text{VP} \rightarrow \text{V NP}\) \(R_{09}: \text{VP} \rightarrow \text{V NP PNP}\) \(R_{10}: \text{PNP} \rightarrow \text{Prep NP}\) complemented by the lexicon \(L\): a : Det blue : Adj, N drink : N, V drinks : N, V friends : N from : Prep gave : V letter : N my : Det neighbor : N nice : Adj, N of : Prep postman : N ran : V the : Det to : Prep
How many (syntactic and lexical) rules does the extended Chomsky Normal Form grammar equivalent to \(G\) contain, if produced as described in the parsing lecture?
[ "11 rules", "48 rules", "the grammar \\(G\\) already is in extended Chomsky Normal Form", "the grammar \\(G\\) cannot be converted to extended Chomsky Normal Form" ]
A
Consider the following context-free grammar \(G\) (where \(\text{S}\) is the top-level symbol): \(R_{01}: \text{S} \rightarrow \text{NP VP}\) \(R_{02}: \text{NP} \rightarrow \text{NP0}\) \(R_{03}: \text{NP} \rightarrow \text{Det NP0}\) \(R_{04}: \text{NP0} \rightarrow \text{N}\) \(R_{05}: \text{NP0} \rightarrow \text{Adj N}\) \(R_{06}: \text{NP0} \rightarrow \text{NP0 PNP}\) \(R_{07}: \text{VP} \rightarrow \text{V}\) \(R_{08}: \text{VP} \rightarrow \text{V NP}\) \(R_{09}: \text{VP} \rightarrow \text{V NP PNP}\) \(R_{10}: \text{PNP} \rightarrow \text{Prep NP}\) complemented by the lexicon \(L\): a : Det blue : Adj, N drink : N, V drinks : N, V friends : N from : Prep gave : V letter : N my : Det neighbor : N nice : Adj, N of : Prep postman : N ran : V the : Det to : Prep
How many (syntactic and lexical) rules does the extended Chomsky Normal Form grammar equivalent to \(G\) contain, if produced as described in the parsing lecture?
[ "the grammar \\(G\\) already is in extended Chomsky Normal Form", "48 rules", "the grammar \\(G\\) cannot be converted to extended Chomsky Normal Form", "31 rules" ]
D
Select the answer that correctly describes the differences between formal and natural languages. 
[ "Formal languages are by construction implicit and non-ambiguous while natural languages are explicit and ambiguous", "Formal languages are by construction explicit and ambiguous while natural languages are implicit and non-ambiguous", "Formal languages are by construction explicit and non-ambiguous while natur...
C
Which of the following are parameters involved in the choice made by an order-1 HMM model for PoS tagging, knowing that its output is "this/Pron is/V a/Det good/Adj question/N" and that neither "is" nor "question" can be adjectives, and that "question" can also not be a determiner. (Penalty for wrong ticks.)
[ "P(question|N)", "P(this)", "P(Adj|a)", "P(this V)" ]
A
Which of the following are parameters involved in the choice made by an order-1 HMM model for PoS tagging, knowing that its output is "this/Pron is/V a/Det good/Adj question/N" and that neither "is" nor "question" can be adjectives, and that "question" can also not be a determiner. (Penalty for wrong ticks.)
[ "P(N|question)", "P(this V)", "P(Pron)", "P(Adj|V Det)" ]
C
Which of the following are parameters involved in the choice made by an order-1 HMM model for PoS tagging, knowing that its output is "this/Pron is/V a/Det good/Adj question/N" and that neither "is" nor "question" can be adjectives, and that "question" can also not be a determiner. (Penalty for wrong ticks.)
[ "P(Adj|Det)", "P(Pron is)", "P(this)", "P(question|Adj)" ]
A
For each of the sub-questions of this question (next page), tick/check the corresponding box if the presented sentence is correct at the corresponding level (for a human). There will be a penalty for wrong boxes ticked/checked.
"Some sentences is hard understand to."
[ "lexical", "none of the above is correct", "pragmatic", "syntactic" ]
A
Select the morpho-syntactic categories that do not carry much semantic content and are thus usually filtered out from indexing.
[ "Adjectives", "Nouns", "Determiners", "Verbs" ]
C
Select the morpho-syntactic categories that do not carry much semantic content and are thus usually filtered out from indexing.
[ "Adjectives", "Verbs", "Nouns", "Conjunctions" ]
D
Consider the following lexicon \(L\): boy : Adj, N boys : N blue : Adj, N drink : N, V drinks : N, V Nice : Adj, N When using an order-1 HMM model (using \(L\)) to tag the word sequence "Nice boys drink blue drinks", does the tag of drink depend on the tag of Nice?
[ "yes, because the HMM approach relies on a global maximum.", "no, the hypotheses make the two tags independent from each other.", "I don't know", "I don't know" ]
B
Consider the following context-free grammar \(G\) (where \(\text{S}\) is the top-level symbol): \(R_{01}: \text{S} \rightarrow \text{NP VP}\) \(R_{02}: \text{NP} \rightarrow \text{NP0}\) \(R_{03}: \text{NP} \rightarrow \text{Det NP0}\) \(R_{04}: \text{NP0} \rightarrow \text{N}\) \(R_{05}: \text{NP0} \rightarrow \text{Adj N}\) \(R_{06}: \text{NP0} \rightarrow \text{NP0 PNP}\) \(R_{07}: \text{VP} \rightarrow \text{V}\) \(R_{08}: \text{VP} \rightarrow \text{V NP}\) \(R_{09}: \text{VP} \rightarrow \text{V NP PNP}\) \(R_{10}: \text{PNP} \rightarrow \text{Prep NP}\) complemented by the lexicon \(L\): a : Det blue : Adj, N drink : N, V drinks : N, V friends : N from : Prep gave : V letter : N my : Det neighbor : N nice : Adj, N of : Prep postman : N ran : V the : Det to : Prep
If the notation \(T(w)\) is used to refer to the rule \(T \rightarrow w\), which of the following correspond to valid derivations according to the grammar \(G\)? (Penalty for wrong ticks.)
[ "\\(R_{01}, R_{08}, R_{02}, R_{04}, \\text{N}(\\text{letter}), \\text{V}(\\text{ran}), R_{03}, \\text{Det}(\\text{the}), R_{04}, \\text{N}(\\text{drinks})\\)", "\\(R_{01}, R_{03}, \\text{Det}(\\text{a}), R_{05}, \\text{Adj}(\\text{blue}), \\text{N}(\\text{drink}), R_{07}, \\text{V}(\\text{ran})\\)", "\\(R_{01},...
B
Select all statements that are true. A penalty will be applied for any wrong answers.
[ "Syntactic ambiguity has no effect on the algorithmic complexity of parsers.", "The analyzer functionality of a parser determines the set of all possible associated syntactic structures for any syntactically correct sentence.", "For a sentence to be acceptable in general, it is sufficient to satisfy the positio...
B
Select all statements that are true. A penalty will be applied for any wrong answers.
[ "Syntactic ambiguity has no effect on the algorithmic complexity of parsers.", "For a sentence to be acceptable in general, it is sufficient to satisfy the positional and selectional constraints of a given language.", "The recognizer functionality of a parser decides if a given sequence of words is syntacticall...
C
Select all statements that are true. A penalty will be applied for any wrong answers.
[ "For a sentence to be acceptable in general, it is sufficient to satisfy the positional and selectional constraints of a given language.", "Syntactic ambiguity has no effect on the algorithmic complexity of parsers.", "Determining whether a sentence has a pragmatic meaning depends on the context that is availab...
C
Select which statements are true about the CYK algorithm. A penalty will be applied for any incorrect answers.
[ "Its time complexity decreases when the grammar is regular.", "Its time complexity is \\( O(n^3) \\), where \\( n \\) is the length of sequence of words to be parsed.", "It is a top-down chart parsing algorithm.", "I don't know" ]
B
Select which statements are true about the CYK algorithm. A penalty will be applied for any incorrect answers.
[ "It is a top-down chart parsing algorithm.", "Its time complexity decreases when the grammar is regular.", "The Context-Free Grammar used with the CYK algorithm has to be converted into extended Chomsky normal form.", "I don't know" ]
C
Select which statements are true about the CYK algorithm. A penalty will be applied for any incorrect answers.
[ "It is a top-down chart parsing algorithm.", "It not only generates the syntactic interpretations of the sequence to be analyzed but also generates the syntactic interpretations of all the sub-sequences of the sequence to be analyzed.", "Its time complexity decreases when the grammar is regular.", "I don't kn...
B
Select all statements that are true. A penalty will be applied for any wrong answers.
[ "Dependency grammars better describe positional constraints.", "Phrase-structure grammars are relatively better suited for fixed-order languages than free-order languages.", "Phrase-structure grammars better describe selectional constraints.", "The expressive power of context-free grammars are higher than tha...
B
Select all statements that are true. A penalty will be applied for any wrong answers.
[ "The expressive power of context-free grammars are higher than that of context-dependent grammars.", "Dependency grammars better describe positional constraints.", "Dependency grammars describe functional dependencies between words in a sequence.", "Phrase-structure grammars better describe selectional constr...
C
Select all statements that are true. A penalty will be applied for any wrong answers.
[ "Phrase-structure grammars better describe selectional constraints.", "The expressive power of context-free grammars are higher than that of context-dependent grammars.", "Any context-free grammar can be transformed into Chomsky-Normal form.", "Dependency grammars better describe positional constraints." ]
C
For this question, one or more assertions can be correct. Tick only the correct assertion(s). There will be a penalty for wrong assertions ticked. Which of the following associations can be considered as illustrative examples for inflectional morphology (with here the simplifying assumption that canonical forms are restricted to the roots only)?
[ "(hypothesis, hypotheses)", "(activate, action)", "(speaking, talking)", "I don't know" ]
A
For this question, one or more assertions can be correct. Tick only the correct assertion(s). There will be a penalty for wrong assertions ticked. Which of the following associations can be considered as illustrative examples for inflectional morphology (with here the simplifying assumption that canonical forms are restricted to the roots only)?
[ "(to go, went)", "(speaking, talking)", "(activate, action)", "I don't know" ]
A
Consider the following lexicon \(L\): bear : V, N bears : V, N blue : Adj, N drink : N, V drinks : N, V Nice : Adj, N When using an order-1 HMM model (using \(L\)) to tag the word sequence "Nice bears drink blue drinks", does the tag of drink depend on the tag of Nice?
[ "yes, because the HMM approach relies on a global maximum.", "no, the hypotheses make the two tags independent from each other.", "I don't know", "I don't know" ]
A
What could Out of Vocabulary (OoV) forms consist of? Select all that apply. A penalty will be applied for wrong answers.
[ "Words borrowed from other languages", "Words from the lexicon", "I don't know", "I don't know" ]
A
What could Out of Vocabulary (OoV) forms consist of? Select all that apply. A penalty will be applied for wrong answers.
[ "Words from the lexicon", "Words with spelling errors", "I don't know", "I don't know" ]
B
What could Out of Vocabulary (OoV) forms consist of? Select all that apply. A penalty will be applied for wrong answers.
[ "Words from the lexicon", "Neologisms", "I don't know", "I don't know" ]
B
What could Out of Vocabulary (OoV) forms consist of? Select all that apply. A penalty will be applied for wrong answers.
[ "Words from the lexicon", "Abbreviations", "I don't know", "I don't know" ]
B
Select all the statements that are true. A penalty will be applied for any incorrect answers selected.
[ "Documents that are orthogonal to each other gives a cosine similarity measure of 1.", "The Luhn law states that if a set of words are ranked by the decreasing order of their frequencies, the high-ranked words are the best features for identifying the topics that occur in the document collection.", "The order o...
C
Select all the statements that are true. A penalty will be applied for any incorrect answers selected.
[ "High values of document frequency means that the word is not very discriminative.", "The Luhn law states that if a set of words are ranked by the decreasing order of their frequencies, the high-ranked words are the best features for identifying the topics that occur in the document collection.", "Documents tha...
A
Select all the statements that are true. A penalty will be applied for any incorrect answers selected.
[ "Documents that are orthogonal to each other gives a cosine similarity measure of 1.", "Cosine similarity is independent of the length of the documents.", "The Luhn law states that if a set of words are ranked by the decreasing order of their frequencies, the high-ranked words are the best features for identify...
B
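A quick numeric sketch (my own illustration, with made-up toy vectors) of the two cosine-similarity facts used in the question above: orthogonal documents have cosine similarity 0, not 1, and scaling a document's length leaves the cosine unchanged.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: dot product normalized by both vector lengths.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

d1 = np.array([1.0, 0.0, 2.0])
d2 = np.array([0.0, 3.0, 0.0])   # orthogonal to d1

assert abs(cosine(d1, d2)) < 1e-12            # orthogonal -> similarity 0
assert abs(cosine(d1, 5 * d1) - 1.0) < 1e-12  # scaling the length changes nothing
```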
Which of the following statements are true?
[ "Training a $k$-nearest-neighbor classifier takes more computational time than applying it / using it for prediction.", "k-nearest-neighbors cannot be used for regression.", "The more training examples, the more accurate the prediction of a $k$-nearest-neighbor classifier.", "I don't know" ]
C
Which of the following statements are true?
[ "k-nearest-neighbors cannot be used for regression.", "A $k$-nearest-neighbor classifier is sensitive to outliers.", "Training a $k$-nearest-neighbor classifier takes more computational time than applying it / using it for prediction.", "I don't know" ]
B
Let $n$ be an integer such that $n \geq 2$, let $A \in \mathbb{R}^{n \times n}$, and let $\mathbf{x} \in \mathbb{R}^n$. Consider the function $f(\mathbf{x}) = \mathbf{x}^\top A \mathbf{x}$ defined over $\mathbb{R}^n$. Which of the following is the gradient of the function $f$?
[ "$2A^\top xv$", "$A^\top xv + Axv$$\nabla f (xv)= A^\top xv+Axv$. Here the matrix $A$ is not symmetric.", "$2Axv$", "$A^\top xv + Axv$" ]
B
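A numeric sanity check for the gradient question above (my own sketch, with random data): for a non-symmetric $A$, the gradient of $f(\mathbf{x}) = \mathbf{x}^\top A \mathbf{x}$ is $A^\top \mathbf{x} + A\mathbf{x}$, which central finite differences confirm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # deliberately not symmetric
x = rng.standard_normal(n)

f = lambda v: v @ A @ v
analytic = A.T @ x + A @ x        # claimed gradient (A + A^T) x

# Central finite differences along each coordinate direction.
eps = 1e-6
numeric = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(n)
])

assert np.allclose(numeric, analytic, atol=1e-5)
```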
Consider a classification problem using either SVMs or logistic regression and separable data. For logistic regression we use a small regularization term (penalty on weights) in order to make the optimum well-defined. Consider a point that is correctly classified and distant from the decision boundary. Assume that we move this point slightly. What will happen to the decision boundary?
[ "Small change for SVMs and small change for logistic regression.", "No change for SVMs and a small change for logistic regression.", "No change for SVMs and no change for logistic regression.", "Small change for SVMs and large change for logistic regression." ]
B
You are given a distribution on $X, Y$, and $Z$ and you know that the joint distribution can be written in the form $p(x, y, z)=p(x) p(y \mid x) p(z \mid y)$. What conclusion can you draw? [Recall that $\perp$ means independent and $\mid \cdots$ means conditioned on $\cdots$.]
[ "$X \\perp Z$", "$Y \\perp Z$", "$X \\perp Z \\quad \\mid Y$", "$X \\perp Y \\mid Z$" ]
C
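The Markov-chain factorization in the question above can be checked numerically. This sketch (the probability tables are made up) builds a joint $p(x,y,z) = p(x)\,p(y \mid x)\,p(z \mid y)$ on binary variables and verifies that $X \perp Z \mid Y$ holds while marginal independence $X \perp Z$ does not.

```python
import numpy as np

# Arbitrary toy distributions (rows sum to 1 where appropriate).
px = np.array([0.3, 0.7])
py_x = np.array([[0.9, 0.1], [0.4, 0.6]])   # py_x[x, y] = p(y | x)
pz_y = np.array([[0.2, 0.8], [0.5, 0.5]])   # pz_y[y, z] = p(z | y)

# Joint p(x, y, z) from the chain factorization.
p = np.einsum('x,xy,yz->xyz', px, py_x, pz_y)
assert np.isclose(p.sum(), 1.0)

# Conditional independence: p(x, z | y) factorizes as p(x | y) p(z | y).
for y in range(2):
    pxz_y = p[:, y, :] / p[:, y, :].sum()
    assert np.allclose(pxz_y, np.outer(pxz_y.sum(axis=1), pxz_y.sum(axis=0)))

# But X and Z are generally NOT marginally independent.
pxz = p.sum(axis=1)
assert not np.allclose(pxz, np.outer(pxz.sum(axis=1), pxz.sum(axis=0)))
```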
Consider our standard least-squares problem $$ \operatorname{argmin}_{\mathbf{w}} \mathcal{L}(\mathbf{w})=\operatorname{argmin}_{\mathbf{w}} \frac{1}{2} \sum_{n=1}^{N}\left(y_{n}-\mathbf{x}_{n}^{\top} \mathbf{w}\right)^{2}+\frac{\lambda}{2} \sum_{d=1}^{D} w_{d}^{2} $$ Here, $\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ is the data. The $N$-length vector of outputs is denoted by $\mathbf{y}$. The $N \times D$ data matrix is called $\mathbf{X}$. Its rows contain the tuples $\mathbf{x}_{n}$. Finally, the parameter vector of length $D$ is called $\mathbf{w}$. (All just like we defined in the course). Mark any of the following formulas that represent an equivalent way of solving this problem.
[ "$\\operatorname{argmin}_{\\boldsymbol{\\alpha}} \\frac{1}{2} \\boldsymbol{\\alpha}^{\\top}\\left(\\mathbf{X X}^{\\top}+\\lambda \\mathbf{I}_{N}\\right) \\boldsymbol{\\alpha}-\\boldsymbol{\\alpha}^{\\top} \\mathbf{y}$", "$\\operatorname{argmin}_{\\mathbf{w}} \\sum_{n=1}^{N}\\left[1-y_{n} \\mathbf{x}_{n}^{\\top} \...
A
Consider our standard least-squares problem $$ \operatorname{argmin}_{\mathbf{w}} \mathcal{L}(\mathbf{w})=\operatorname{argmin}_{\mathbf{w}} \frac{1}{2} \sum_{n=1}^{N}\left(y_{n}-\mathbf{x}_{n}^{\top} \mathbf{w}\right)^{2}+\frac{\lambda}{2} \sum_{d=1}^{D} w_{d}^{2} $$ Here, $\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ is the data. The $N$-length vector of outputs is denoted by $\mathbf{y}$. The $N \times D$ data matrix is called $\mathbf{X}$. Its rows contain the tuples $\mathbf{x}_{n}$. Finally, the parameter vector of length $D$ is called $\mathbf{w}$. (All just like we defined in the course). Mark any of the following formulas that represent an equivalent way of solving this problem.
[ "$\\square \\operatorname{argmin}_{\\mathbf{w}} \\frac{1}{2} \\sum_{n=1}^{N} \\ln \\left(1+e^{\\mathbf{x}_{n}^{\\top} \\mathbf{w}}\\right)-y_{n} \\mathbf{x}_{n}^{\\top} \\mathbf{w}$", "$\\operatorname{argmin}_{\\mathbf{w}} \\sum_{n=1}^{N}\\left[1-y_{n} \\mathbf{x}_{n}^{\\top} \\mathbf{w}\\right]_{+}+\\frac{\\lamb...
C
Consider our standard least-squares problem $$ \operatorname{argmin}_{\mathbf{w}} \mathcal{L}(\mathbf{w})=\operatorname{argmin}_{\mathbf{w}} \frac{1}{2} \sum_{n=1}^{N}\left(y_{n}-\mathbf{x}_{n}^{\top} \mathbf{w}\right)^{2}+\frac{\lambda}{2} \sum_{d=1}^{D} w_{d}^{2} $$ Here, $\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ is the data. The $N$-length vector of outputs is denoted by $\mathbf{y}$. The $N \times D$ data matrix is called $\mathbf{X}$. Its rows contain the tuples $\mathbf{x}_{n}$. Finally, the parameter vector of length $D$ is called $\mathbf{w}$. (All just like we defined in the course). Mark any of the following formulas that represent an equivalent way of solving this problem.
[ "$\\operatorname{argmin}_{\\mathbf{w}} \\frac{1}{2}\\|\\mathbf{y}-\\mathbf{X} \\mathbf{w}\\|^{2}+\\frac{\\lambda}{2}\\|\\mathbf{w}\\|^{2}$", "$\\square \\operatorname{argmin}_{\\mathbf{w}} \\frac{1}{2} \\sum_{n=1}^{N} \\ln \\left(1+e^{\\mathbf{x}_{n}^{\\top} \\mathbf{w}}\\right)-y_{n} \\mathbf{x}_{n}^{\\top} \\ma...
A
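The primal/dual equivalence behind the ridge question above can be verified numerically. This is my own sketch with small random data: the primal solution $\mathbf{w}^\star = (\mathbf{X}^\top\mathbf{X} + \lambda \mathbf{I}_D)^{-1}\mathbf{X}^\top\mathbf{y}$ coincides with $\mathbf{X}^\top\boldsymbol{\alpha}^\star$, where $\boldsymbol{\alpha}^\star = (\mathbf{X}\mathbf{X}^\top + \lambda \mathbf{I}_N)^{-1}\mathbf{y}$ minimizes the dual quadratic.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, lam = 8, 3, 0.5
X = rng.standard_normal((N, D))
y = rng.standard_normal(N)

# Primal ridge solution: (X^T X + lam I_D)^{-1} X^T y.
w_primal = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ y)

# Dual solution: alpha minimizes (1/2) a^T (X X^T + lam I_N) a - a^T y.
alpha = np.linalg.solve(X @ X.T + lam * np.eye(N), y)
w_dual = X.T @ alpha

assert np.allclose(w_primal, w_dual)
```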
In Text Representation learning, which of the following statements is correct?
[ "Learning GloVe vectors can be done using SGD in a streaming fashion, by streaming through the input text only once.", "Every recommender systems algorithm for learning a matrix factorization $\\boldsymbol{W} \\boldsymbol{Z}^{\\top}$ approximating the observed entries in least square sense does also apply to lear...
B
In Text Representation learning, which of the following statements is correct?
[ "If you fix all word vectors, and only train the remaining parameters, then FastText in the two-class case reduces to being just a linear classifier.", "Learning GloVe vectors can be done using SGD in a streaming fashion, by streaming through the input text only once.", "FastText performs unsupervised learning ...
A
Consider the following joint distribution on $X$ and $Y$, where both random variables take on the values $\{0,1\}$: $p(X=0, Y=0)=0.1$, $p(X=0, Y=1)=0.2$, $p(X=1, Y=0)=0.3$, $p(X=1, Y=1)=0.4$. You receive $X=1$. What is the largest probability of being correct you can achieve when predicting $Y$ in this case?
[ "$1$", "$\\frac{1}{3}$", "$0$", "$\\frac{4}{7}$" ]
D
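The arithmetic behind the answer above can be worked out in a few lines: given $X=1$, the best prediction is the more likely $Y$, and $p(Y=1 \mid X=1) = 0.4 / (0.3 + 0.4) = 4/7$.

```python
# Joint distribution from the question, keyed by (x, y).
p = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

p_x1 = p[(1, 0)] + p[(1, 1)]            # p(X = 1) = 0.7
best = max(p[(1, 0)], p[(1, 1)]) / p_x1  # probability of the best guess

assert abs(best - 4 / 7) < 1e-9
```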
Which of the following statements are correct?
[ "One iteration of standard SGD for logistic regression costs roughly $\\Theta(N D)$, where $N$ is the number of samples and $D$ is the dimension.", "Hinge loss (as in SVMs) is typically preferred over L2 loss (least squares loss) in classification tasks.", "In PCA, the first principal direction is the eigenvect...
B
Which of the following statements are correct?
[ "MSE (mean squared error) is typically more sensitive to outliers than MAE (mean absolute error)", "One iteration of standard SGD for SVM costs roughly $\\Theta(D)$, where $D$ is the dimension.", "One iteration of standard SGD for logistic regression costs roughly $\\Theta(N D)$, where $N$ is the number of samp...
A
Which of the following statements are correct?
[ "One iteration of standard SGD for SVM costs roughly $\\Theta(D)$, where $D$ is the dimension", "One iteration of standard SGD for SVM costs roughly $\\Theta(D)$, where $D$ is the dimension.", "In PCA, the first principal direction is the eigenvector of the data matrix $\\boldsymbol{X}$ with largest associated ...
A
(Backpropagation) Training via the backpropagation algorithm always learns a globally optimal neural network if there is only one hidden layer and we run an infinite number of iterations and decrease the step size appropriately over time.
[ "True", "False", "I don't know", "I don't know" ]
B
Which of the following statements about the $\mathrm{SVD}$ of an $N \times D$ matrix $\mathbf{X}$ are correct?
[ "We can compute the singular values of $\\mathbf{X}$ by computing the eigenvalues of $\\mathbf{X X}^{\\top}$. This has complexity $O\\left(D^{3}\\right)$.", "We can compute the singular values of $\\mathbf{X}$ by computing the eigenvalues of $\\mathbf{X}^{\\top} \\mathbf{X}$. This has complexity $O\\left(N^{3}\\r...
D
Which of the following statements about the $\mathrm{SVD}$ of an $N \times D$ matrix $\mathbf{X}$ are correct?
[ "We can compute the singular values of $\\mathbf{X}$ by computing the eigenvalues of $\\mathbf{X X}^{\\top}$. This has complexity $O\\left(N^{3}\\right)$.", "We can compute the singular values of $\\mathbf{X}$ by computing the eigenvalues of $\\mathbf{X}^{\\top} \\mathbf{X}$. This has complexity $O\\left(N^{3}\\r...
A
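The SVD facts in the question above can be checked on random data (my own sketch): the singular values of $\mathbf{X}$ are the square roots of the eigenvalues of $\mathbf{X}^\top\mathbf{X}$ (a $D \times D$ problem, roughly $O(D^3)$) and equally of the nonzero eigenvalues of $\mathbf{X}\mathbf{X}^\top$ (an $N \times N$ problem, roughly $O(N^3)$).

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 6, 3
X = rng.standard_normal((N, D))

sv = np.linalg.svd(X, compute_uv=False)        # D singular values, descending

# Eigenvalues of the small (D x D) and large (N x N) Gram matrices,
# sorted descending; the large one has N - D extra near-zero eigenvalues.
eig_small = np.linalg.eigvalsh(X.T @ X)[::-1]
eig_big = np.linalg.eigvalsh(X @ X.T)[::-1][:D]

assert np.allclose(sv, np.sqrt(np.clip(eig_small, 0, None)))
assert np.allclose(sv, np.sqrt(np.clip(eig_big, 0, None)))
```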
Consider a linear regression problem with $N$ samples where the input is in $D$-dimensional space, and all output values are $y_{i} \in\{-1,+1\}$. Which of the following statements is correct?
[ "(c)", "(b) linear regression cannot \"work\" if $N \\ll D$", "(a) linear regression cannot \"work\" if $N \\gg D$", "(c) linear regression can be made to work perfectly if the data is linearly separable" ]
A
Consider a matrix factorization problem of the form $\mathbf{X}=\mathbf{W Z}^{\top}$ to obtain an item-user recommender system where $x_{i j}$ denotes the rating given by $j^{\text {th }}$ user to the $i^{\text {th }}$ item . We use Root mean square error (RMSE) to gauge the quality of the factorization obtained. Select the correct option.
[ "For obtaining a robust factorization of a matrix $\\mathbf{X}$ with $D$ rows and $N$ elements where $N \\ll D$, the latent dimension $\\mathrm{K}$ should lie somewhere between $D$ and $N$.", "Given a new item and a few ratings from existing users, we need to retrain the already trained recommender system from sc...
C
Consider the composite function $f(x)=g(h(x))$, where all functions are $\mathbb{R}$ to $\mathbb{R}$. Which of the following is the weakest condition that guarantees that $f(x)$ is convex?
[ "$g(x)$ and $h(x)$ are convex and $g(x)$ and $h(x)$ are increasing", "$g(x)$ and $h(x)$ are convex and $g(x)$ is increasing", "$g(x)$ and $h(x)$ are convex and $h(x)$ is increasing", "$g(x)$ is convex and $h(x)$ is increasing" ]
B
Matrix Factorizations: The function $f(\mathbf{v}):=g\left(\mathbf{v} \mathbf{v}^{\top}\right)$ is convex over the vectors $\mathbf{v} \in \mathbb{R}^{2}$, when $g: \mathbb{R}^{2 \times 2} \rightarrow \mathbb{R}$ is defined as
[ "(b) if we define $g: \\mathbb{R}^{2 \\times 2} \\rightarrow \\mathbb{R}$ as $g(\\mathbf{X}):=X_{11}+X_{22}$.", "(a)", "(a) if we define $g: \\mathbb{R}^{2 \\times 2} \\rightarrow \\mathbb{R}$ as $g(\\mathbf{X}):=X_{11}$.", "I don't know" ]
B
Matrix Factorizations: The function $f(\mathbf{v}):=g\left(\mathbf{v} \mathbf{v}^{\top}\right)$ is convex over the vectors $\mathbf{v} \in \mathbb{R}^{2}$, when $g: \mathbb{R}^{2 \times 2} \rightarrow \mathbb{R}$ is defined as
[ "(b) if we define $g: \\mathbb{R}^{2 \\times 2} \\rightarrow \\mathbb{R}$ as $g(\\mathbf{X}):=X_{11}+X_{22}$.", "(b)", "(a) if we define $g: \\mathbb{R}^{2 \\times 2} \\rightarrow \\mathbb{R}$ as $g(\\mathbf{X}):=X_{11}$.", "I don't know" ]
B
Our task is to classify whether an animal is a dog (class 0) or a cat (class 1) based on the following features: \begin{itemize} \item $x_1$: height \item $x_2$: length of whiskers \item $x_3$: thickness of fur \end{itemize} We perform standard normal scaling on the training features so that they have a mean of zero and standard deviation of 1. We have trained a Logistic Regression model to determine the probability that the animal is a cat, $p(1 \mid \mathbf{x}, \mathbf{w})$. Our classifier learns that cats have a lower height and longer whiskers than dogs, while the thickness of fur is not relevant to the classification outcome. Which of the following is true about the weights $\mathbf{w}$ learned by the classifier?
[ "$w_1 < w_3 < w_2$$w_1 < w_3 < w_2$. When the features are standardized, a below-average height $x_1$ becomes negative. Negative heights increase the probability that the animal is a cat, so the height and cat probability are inversely correlated, and therefore $w_1 < 0$. Conversely, a positive whisker length $x_2$...
A
Consider two fully connected networks, A and B, with a constant width for all layers, inputs and outputs. Network A has depth $3L$ and width $H$, network B has depth $L$ and width $2H$. Everything else is identical for the two networks and both $L$ and $H$ are large. In this case, performing a single iteration of backpropagation requires fewer scalar multiplications for network A than for network B.
[ "True", "TrueTrue.\n\t\tThe number of multiplications required for backpropagation is linear in the depth and quadratic in the width, $3LH^2 < L (2H)^2 = 4LH^2$.", "False", "I don't know" ]
B
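The arithmetic behind the answer above (backpropagation cost roughly linear in depth and quadratic in width) can be sanity-checked with a sketch; the concrete sizes `L = 50, H = 512` are arbitrary placeholders, not values from the question:

```python
# Rough multiplication-count comparison for the two networks.
# Cost per pass scales like depth * width^2 for fully connected layers.
L, H = 50, 512
cost_A = 3 * L * H ** 2        # network A: depth 3L, width H  -> 3*L*H^2
cost_B = L * (2 * H) ** 2      # network B: depth L, width 2H -> 4*L*H^2
print(cost_A < cost_B)  # True, since 3*L*H^2 < 4*L*H^2 for any L, H > 0
```

The inequality is independent of the particular `L` and `H` chosen, which is why the statement holds whenever both are large.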
Consider the loss function $L: \R^d \to \R$, $L(\wv) = \frac{\beta}{2}\|\wv\|^2$, where $\beta > 0$ is a constant. We run gradient descent on $L$ with a stepsize $\gamma > 0$ starting from some $\wv_0 \neq 0$. Which of the statements below is true?
[ "Gradient descent converges to the global minimum for any stepsize in the interval $\\gamma \\in \\big( 0, \\frac{2}{\\beta}\\big)$. The update rule is $\\wv_{t+1} = \\wv_t - \\gamma\\beta \\wv_t = (1 - \\gamma\\beta) \\wv_t$. Therefore we have that the sequence $\\{\\|\\wv_{t}\\|\\}_t$ is given by $\\|\\wv_{t+1}\\| = \\l...
A
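The contraction argument in the explanation above can be checked numerically; the update $\wv_{t+1} = (1 - \gamma\beta)\wv_t$ shrinks the norm whenever $|1 - \gamma\beta| < 1$, i.e. $\gamma \in (0, 2/\beta)$. The values `beta = 2.0` and the 1-D starting point are illustrative choices, not part of the question:

```python
# Minimal sketch: gradient descent on L(w) = (beta/2) * ||w||^2 in 1-D.
# grad L(w) = beta * w, so the update is w <- w - gamma*beta*w = (1 - gamma*beta)*w.
def gd_norm(beta, gamma, w0, steps=200):
    w = w0
    for _ in range(steps):
        w = (1.0 - gamma * beta) * w
    return abs(w)

beta = 2.0
# any stepsize in (0, 2/beta) contracts ||w_t|| geometrically toward 0
print(gd_norm(beta, gamma=0.9 / beta, w0=5.0) < 1e-6)  # True
```

With `gamma = 0.9 / beta` the contraction factor is $|1 - 0.9| = 0.1$ per step, so the iterate collapses to the global minimum at the origin.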
You are doing your ML project. It is a regression task under a square loss. Your neighbor uses linear regression and least squares. You are smarter. You are using a neural net with 10 layers and activation functions $f(x)=3 x$. You have a powerful laptop but not a supercomputer. You bet your neighbor a beer at Satellite on who will get a substantially better score. However, in the end it is essentially a tie, so you decide to have two beers and both pay. What is the reason for the outcome of this bet?
[ "Because I should have used only one layer.", "Because we use exactly the same scheme.", "Because it is almost impossible to train a network with 10 layers without a supercomputer.", "Because I should have used more layers." ]
B
Let $f:\R^D \rightarrow \R$ be an $L$-hidden layer multi-layer perceptron (MLP) such that \[ f(\xv)=\sigma_{L+1}\big(\wv^\top\sigma_L(\Wm_L\sigma_{L-1}(\Wm_{L-1}\dots\sigma_1(\Wm_1\xv)))\big), \] with $\wv\in\R^{M}$, $\Wm_1\in\R^{M\times D}$ and $\Wm_\ell\in\R^{M\times M}$ for $\ell=2,\dots, L$, and $\sigma_i$ for $i=1,\dots,L+1$ is an entry-wise activation function. For any MLP $f$ and a classification threshold $\tau$ let $C_{f,\tau}$ be a binary classifier that outputs YES for a given input $\xv$ if $f(\xv) \leq \tau$ and NO otherwise. \vspace{3mm} Assume $\sigma_{L+1}$ is the element-wise \textbf{sigmoid} function and $C_{f,\frac{1}{2}}$ is able to obtain a high accuracy on a given binary classification task $T$. Let $g$ be the MLP obtained by multiplying the parameters \textbf{in the last layer} of $f$, i.e. $\wv$, by 2. Moreover, let $h$ be the MLP obtained by replacing $\sigma_{L+1}$ with element-wise \textbf{ReLU}. Finally, let $q$ be the MLP obtained by doing both of these actions. Which of the following is true? $\mathrm{ReLU}(x) = \max\{x, 0\}$ \\ $\mathrm{Sigmoid}(x) = \frac{1}{1 + e^{-x}}$
[ "$C_{g, \\frac{1}{2}}$ may have an accuracy significantly lower than $C_{f, \\frac{1}{2}}$ on $T$", "$C_{g, \\frac{1}{2}}$, $C_{h, 0}$, and $C_{q, 0}$ have the same accuracy as $C_{f, \\frac{1}{2}}$ on $T$\n Since the threshold $\\frac{1}{2}$ for sigmoid corresponds to the input to the last activation ...
B
Which of the following is correct regarding Louvain algorithm?
[ "It creates a hierarchy of communities with a common root", "Clique is the only topology of nodes where the algorithm detects the same communities, independently of the starting point", "Modularity is always maximal for the communities found at the top level of the community hierarchy", "If n cliques of the s...
D
Let the first four retrieved documents be N N R R, where N denotes a non-relevant and R a relevant document. Then the MAP (Mean Average Precision) is:
[ "1/2", "7/24", "5/12", "3/4" ]
C
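The answer 5/12 for the ranking N N R R can be reproduced with a short sketch of average precision (assuming binary relevance and averaging precision over the relevant documents, as in the standard MAP definition):

```python
def average_precision(rels):
    """rels: list of 0/1 relevance flags in rank order (binary relevance)."""
    hits, total = 0, 0.0
    for k, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            total += hits / k  # precision at each relevant position
    return total / hits if hits else 0.0

# N N R R -> (1/3 + 2/4) / 2 = 5/12 ≈ 0.4167
print(average_precision([0, 0, 1, 1]))
```

The relevant documents sit at ranks 3 and 4, contributing precisions 1/3 and 2/4; their mean is 5/12.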
Which of the following is true?
[ "High precision implies low recall", "High recall implies low precision", "High precision hurts recall", "I don't know" ]
C
Which of the following is true?
[ "High recall hurts precision", "High recall implies low precision", "High precision implies low recall", "I don't know" ]
A
The inverse document frequency of a term can increase
[ "by removing a document from the document collection that does not contain the term", "by adding a document to the document collection that does not contain the term", "by adding a document to the document collection that contains the term", "by adding the term to a document that contains the term" ]
B
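The monotonicity behind the answer above can be illustrated with one common idf variant, $\mathrm{idf}(t) = \log(N/\mathrm{df}_t)$; the concrete counts below are hypothetical, and the argument holds for any formula that increases in $N$ with $\mathrm{df}_t$ fixed:

```python
import math

def idf(num_docs, doc_freq):
    # one standard inverse-document-frequency variant: log(N / df)
    return math.log(num_docs / doc_freq)

# Adding a document that does NOT contain the term: N grows, df is unchanged,
# so idf strictly increases.
print(idf(101, 10) > idf(100, 10))  # True
```

Conversely, adding a document that does contain the term increases both $N$ and $\mathrm{df}_t$, so the ratio (and hence idf) need not grow.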
In a Ranked Retrieval result, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true (P@k and R@k are the precision and recall of the result set consisting of the k top ranked documents)?
[ "P@k-1 > P@k+1", "P@k-1 = P@k+1", "R@k-1 = R@k+1", "R@k-1 < R@k+1" ]
D
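The answer follows because recall is non-decreasing in the cutoff and strictly increases exactly when the next document is relevant. A minimal sketch, using a hypothetical ranking with $k=2$ non-relevant and $k+1=3$ relevant:

```python
def recall_at(rels, k, total_relevant):
    """Recall of the top-k result set; rels are 0/1 flags in rank order."""
    return sum(rels[:k]) / total_relevant

# hypothetical ranking: position k=2 is non-relevant, position k+1=3 is relevant
rels, R = [1, 0, 1], 2  # R = total number of relevant docs in the collection
print(recall_at(rels, 1, R) < recall_at(rels, 3, R))  # True: R@(k-1) < R@(k+1)
```

Precision, by contrast, can move either way between $k-1$ and $k+1$ depending on the earlier results, which rules out the other options.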
What is true regarding Fagin's algorithm?
[ "It never reads more than (kn)½ entries from a posting list", "Posting files need to be indexed by TF-IDF weights", "It provably returns the k documents with the largest aggregate scores", "It performs a complete scan over the posting files" ]
C
Which of the following is WRONG for Ontologies?
[ "Different information systems need to agree on the same ontology in order to interoperate.", "They dictate how semi-structured data are serialized.", "They give the possibility to specify schemas for different domains.", "They help in the integration of data expressed in different models." ]
B
What is the benefit of LDA over LSI?
[ "LSI is sensitive to the ordering of the words in a document, whereas LDA is not", "LSI is based on a model of how documents are generated, whereas LDA is not", "LDA represents semantic dimensions (topics, concepts) as weighted combinations of terms, whereas LSI does not", "LDA has better theoretical explanat...
D
Maintaining the order of document identifiers for vocabulary construction when partitioning the document collection is important
[ "in the map-reduce approach for parallel clusters", "in neither of the two", "in the index merging approach for single node machines", "in both" ]
C