[02:26:51] <entropyfails> Heya, a quick hunt through the docs and whatnot didn't help me much, but I wanted to see how many hits a particular collection is getting. The log is mostly filled with initandlisten lines with no real data in them. Is there something to increase the log verbosity so I can get an idea of what queries get run most often?
[03:43:56] <Oddman> big up 10gen and their free online courses
[04:50:23] <tpae> i think you should check that also
[04:50:38] <scrappy> object is the return value from a .find() operation, and the find() was successful
[04:51:30] <tpae> hmm.. i don't see anything wrong :/
[04:52:18] <crudson> scrappy: if you have the full object already and are passing it in update to locate the document, then just change 'somekey' and call .save(object)
[04:54:42] <crudson> otherwise we need more details about what object and value are to diagnose this. if it's the result of a 'find' then it will be a cursor, which is invalid to pass back to update
[04:55:53] <scrappy> "value" is just this, in javascript: var value = "foo";
[04:56:05] <scrappy> but your point about the cursor being invalid sure is important.
[04:56:26] <scrappy> invalid as a param to update, that is.
[04:59:15] <crudson> so it can't be the exact result of find. It's either a findOne or a specific document in the result. But you can just issue the query you used to locate the document in the first place directly in update(), rather than fetching it and passing it back.
[05:01:19] <scrappy> simply done, crudson. and now mongo says field names cannot start with $ [set]
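The `field names cannot start with $` error above usually means a `$set` operator ended up inside a plain document (e.g. passed to save() or as a replacement document). A minimal sketch of the two valid shapes, using pymongo-style dicts as a stand-in for the shell syntax (`somekey`, `foo`, and the `_id` value are illustrative):

```python
# Sketch: $ operators are only legal at the top level of update()'s second
# argument; they are never valid as field names inside a stored document.

# 1. Operator update: changes only `somekey` on the matched document.
query = {"_id": 123}                        # re-issue the query, not a cursor
update = {"$set": {"somekey": "foo"}}
# coll.update(query, update)                # would run against a live server

# 2. Fetch-mutate-save: change the field on the full document, no $ keys.
doc = {"_id": 123, "somekey": "old"}
doc["somekey"] = "foo"
# coll.save(doc)

print(update)   # {'$set': {'somekey': 'foo'}}
```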
[07:24:54] <sunfmin> I have `readers` and `posts`, and I want to track which reader has already read which post, then sort the posts depending on the logged-in reader: unread posts on top, and within the unread and read groups, by published time.
[07:24:55] <sunfmin> How should I store that in mongodb?
[08:50:47] <NodeX> Anyone know if mongo has a cap on the number of objects allowed in a field? for example http://pastie.org/5034038 <-----
[08:51:03] <NodeX> I had 45 or so and it was not returning data as expected
[10:59:45] <jasiek> hey guys. i'm having an odd issue with a mongo instance, and i can't understand why it's happening.
[11:00:16] <jasiek> i have a script that goes over several million records and updates some of them
[11:00:34] <jasiek> occasionally (3 times in 20M), i get this message from the ruby driver: could not obtain connection within 5.0 seconds. The max pool size is currently 1; consider increasing the pool size or timeout.
[11:00:53] <jasiek> i don't understand why mongo would drop the connection in the first place.
[11:01:53] <jasiek> i'm running mongo 2.2.0, and ruby-mongo 1.5.2
[11:08:50] <BlackPanx> is it possible to replicate only certain database in replicaset ?
[11:09:05] <BlackPanx> like with mysql, for example, where you can give it an ignore-table expression
[11:09:06] <DinMamma> I've copied over a database with db.copyDatabase; on the source server all the copying is completed and the _id index is built, but the db.copyDatabase command has yet to terminate (give me back the cursor). I run 2.2, anyone know why that might be?
[13:10:56] <fredix> question about the c++ api: const char* toto = GridFSChunk::data(int len) gives me the size of the chunk in len, but strlen(toto) does not return the same length
[13:23:26] <Jt_-> i have a little problem, i must migrate my db
[13:23:45] <Jt_-> i'm using mongodump & mongorestore
[13:24:13] <Jt_-> but when mongorestore finishes i get this error
[13:24:14] <Jt_-> Assertion: 13111:field not found, expected type 2
[13:24:43] <Jt_-> i read that it is an index problem caused by different mongo versions
[13:25:49] <Jt_-> i tried to reindex after the restore, but when i dump the new db (with the new index) and restore it, the error persists
[13:27:36] <Jt_-> anyone have an idea how to resolve this issue?
[14:08:09] <FriedBob> I've recently installed mongodb from packages on Ubuntu 12.04 LTS (mongod --version shows db version v2.0.4, pdfile version 4.5 ). For some reason, when I do "sudo service mongodb start" it starts, then immediately exits. If I do "sudo mongod" then it starts and runs. Any ideas on where to look for more info, or what might be the cause?
[14:11:08] <NodeX> check the log file and it will tell you
[14:12:16] <FriedBob> Ok, will take another look once I get this fire out. Such a shame all the things that get in the way of "real" work at times.
[14:19:53] <FriedBob> Would it be logging to anywhere other than /var/log/mongodb/mongodb.log ? That file didn't get touched when I started mongodb as a service.
[14:20:54] <FriedBob> Though I did see " mongodb main process (16496) terminated with status 2" in syslog
[14:28:43] <FriedBob> Looks like my puppet module is off.
[15:03:05] <cmacmurray> Good morning mongo users. I am writing some python mongodb management scripts, and I was wondering if someone could help me figure out the appropriate pymongo command to manipulate replica sets, specifically the rs.initiate() function. I have tried things like data = db.command('rs.initiate()') but it is not working; docs would be great
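There is no rs.* helper in pymongo because rs.initiate() is a shell convenience that wraps a server command run against the admin database; drivers issue the underlying command instead. A hedged sketch (connection class names vary by driver version, and the commented calls assume a mongod started with --replSet):

```python
# Shell helper -> underlying admin command (the mapping the shell applies):
helpers = {
    "rs.initiate()": "replSetInitiate",
    "rs.status()": "replSetGetStatus",
}

# Against a live mongod it would look roughly like this (illustrative only):
# from pymongo import Connection          # MongoClient in newer drivers
# conn = Connection("localhost", 27017)
# conn.admin.command("replSetInitiate")   # equivalent of rs.initiate()

print(helpers["rs.initiate()"])   # replSetInitiate
```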
[17:00:08] <Ahlee> trying to write a map/reduce query that only emits a field if the record doesn't contain another bit of information
[17:00:39] <Ahlee> should it just be if (this.deleted : {$exists : false})? That doesn't appear to be valid
[17:00:58] <Ahlee> and I don't know how to tell the JS shell what I mean. And that's the most frustrating part.
[17:06:03] <crudson> Ahlee: why not include {field:{$exists:false}} in the initial map reduce query?
[17:07:11] <Ahlee> crudson: I didn't know you could :) My map/reduce experience is pretty minimal.
[17:07:55] <crudson> Ahlee: look at "Command Syntax" here to see what options you can give to map reduce: http://www.mongodb.org/display/DOCS/MapReduce
[17:09:53] <Ahlee> ah! so the map function would remain 'dumb' as only items matching query will be sent to it
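The point about the map staying "dumb" can be sketched without a server: the `query` option filters documents before they ever reach map(), so map() itself needs no $exists logic. Collection and field names below are illustrative; the commented lines show the shell form.

```javascript
// Shell form (illustrative):
//   db.events.mapReduce(map, reduce,
//     {query: {deleted: {$exists: false}}, out: {inline: 1}})

// Stand-alone simulation of the same flow:
const docs = [
  {user: "a"},
  {user: "a", deleted: true},   // filtered out by the query stage
  {user: "b"},
];

// the `query` option: only docs where `deleted` does not exist survive
const matched = docs.filter((d) => !("deleted" in d));

// the map stays "dumb": it emits for every doc it is handed
const emits = matched.map((d) => [d.user, 1]);

console.log(emits.length);   // 2
```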
[17:26:28] <codemagician> Are there any control scripts for mongod for Ubuntu server?
[17:27:41] <crudson> codemagician: not out of the box last time I saw
[18:03:18] <gdoko> Hi! I've noticed that the MongoDB documentation states that sharding is performed on a per-collection basis. However, the following open JIRA issue https://jira.mongodb.org/browse/SERVER-939 states (in the comments) that if you have e.g. 1,000 collections in 1 database, they will all be in the same shard. Doesn't this imply per-database sharding?
[18:27:12] <aster1sk_> Afternoon ladies and gentlemen.
[18:36:34] <aster1sk> I have a problem with my aggregate queries.
[18:36:41] <aster1sk> See this http://stats.dev.riboflav.in/app/issue_views
[18:36:52] <aster1sk> Click the query tab to see what I'm querying.
[18:37:37] <aster1sk> Without the $match I get (kind of) the result I want, but I can't $lt / $gt on date [d]
[18:40:00] <aster1sk> Ahh, ok well give me the details of exactly what you need and I'll see if I can source it.
[18:48:40] <Austin__> I seem to be having a problem with the init script for mongodb 2.2 from the official repos when installing to Amazon EC2 using one of their "Amazon Linux" (CentOS-like) images.
[18:49:21] <Austin__> After installing, I do "sudo service mongod start" and then get http://pastie.org/5036620
[18:49:43] <Austin__> if I kill everything shown in that pastie, and then start mongo the same way, it works.
[18:50:12] <Austin__> this is problematic for me because my automatic initialization scripts hang while waiting for Mongo to actually start.
[18:50:29] <Austin__> (killing everything shown in the pastie means logging in with a separate session)
[18:50:55] <Austin__> er, by automatic initialization I mean server build scripts. I'm attempting to build this unattended.
[18:51:45] <Austin__> I don't have this if I install mongo20-10gen-server
[18:56:20] <Austin__> Is anyone familiar with this sort of behaviour?
[18:57:15] <aster1sk> Bahh... http://stats.dev.riboflav.in/app/issue_views -> click 'Query' : can someone explain why it's returning the entire document from the $match query?
[19:00:22] <aster1sk> If I remove the match it returns issue_views -> 45
[19:00:28] <aster1sk> But with the match everything breaks.
[19:05:30] <_m> Anyone have a sample query which returns a subset of fields? Using the documentation's example doesn't seem to work.
[19:05:43] <_m> From the docs: coll.find("_id" => id, :fields => ["name", "type"]).to_a
[19:06:57] <crudson> _m: ruby will mash those together into a single hash; you need to: col.find({:_id => id}, {:fields => [:name, :type]})
[19:07:13] <crudson> first is the query, second is options
[19:08:03] <crudson> ignore me using symbols instead of strings, doesn't matter, just a habit of mine
[19:10:09] <_m> crudson: Does matter, just not to mongo. =P
[19:10:42] <_m> crudson: Thanks, btw. documentation led me to believe they were supposed to be in the same hash. Admittedly, that seemed really strange.
[19:11:52] <crudson> right, not when querying, just don't use symbols when referencing document keys in results. Yeah, it's the kind of thing you can stare at for a while and not see what's wrong :)
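The one-hash confusion above is plain Ruby behaviour, demonstrable without the driver: trailing key => value pairs collapse into a single hash argument, so the query and the :fields options get merged unless you brace them separately. (fake_find is a hypothetical stand-in for collection.find.)

```ruby
# A stand-in for collection.find that just reports the args it received.
def fake_find(*args)
  args
end

# Doc-style call: Ruby folds everything into ONE hash -> query and options mix.
merged = fake_find("_id" => 1, :fields => [:name, :type])

# Explicit braces: TWO arguments -> first is the query, second the options.
separate = fake_find({"_id" => 1}, {:fields => [:name, :type]})

puts merged.length     # 1
puts separate.length   # 2
```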
[19:23:38] <mobiltt> I have a question though, if anyone wouldn't mind lending a hand
[19:25:05] <crudson> mobiltt: just ask the question, if someone can help they will try
[19:25:38] <mobiltt> I have an application running alright on mysql, but I have one table that I use as a kv store, and I like mongo a little better for many reasons. So I'm offloading that table to a mongo collection. I'm coming to a situation where I need to associate data in mongodb with a number of mysql tables as well.
[19:26:49] <mobiltt> Has anyone had any experience associating data, like "select ids from table where ids between 1 and 100" and then doing a mongodb find() on those 100 ids?
[19:30:46] <mobiltt> That's of course an example situation. I store random data including some user data in the mongo kv store, but need to "join" that with tables in my mysql database. I know this is not ideal. But could anyone point me in the correct direction here?
[19:42:03] <crudson> mobiltt: you're going to have to stitch those rows/documents together yourself. Depends how they are going to be used or offloaded. I have one (note: read-only) app that is hybrid mysql/mongodb that combines data from both in realtime. It works fine, but I'd think very carefully as to whether this is a sensible road to start down.
[19:42:36] <crudson> mobiltt: if using rails you could theoretically combine ActiveRecord with MongoMapper if you want to have "associations", but you'll have to define those manually
[19:43:44] <crudson> it's an architectural problem and as such hard to answer simply
[19:45:43] <mobiltt> crudson: thanks for your answer
[19:49:07] <mobiltt> crudson: I agree with you that it is firstly an architectural problem. I'm going to attempt to further remove the need for associations across database systems. I just wanted to know if anyone has had similar experiences, and what was the eventual outcome. A new smart db driver, or other changes..
[19:56:39] <crudson> mobiltt: I did it with bespoke application code, with parts abstracted to do the stitching between them. Again, a fuzzy answer.
[20:02:42] <bhosie> i have a collection that i'm trying to use '$addToSet $each' to append multiple sub docs to an array using the php driver. When i run the update, my new subdoc gets wrapped in another doc called {'$each' : ....} if i perform the update in the shell, it works as expected. i'm not sure if this is a bug or my syntax is botched.. http://pastebin.com/YyU8G7rU
[20:15:34] <crudson> bhosie: I don't use php, but is $value definitely an array?
[20:18:09] <bhosie> here's a var_dump() http://pastebin.com/YtWu4W0S
[20:25:41] <bhosie> is there a way to pass a json string straight through to the server? since it works on the shell, something like that might be a good workaround
[20:30:25] <crudson> bhosie: again, not a php user, but that looks like a single object (although the last paren is mismatched). Also, you care about uniqueness? if not does $pushAll work? What shell command works, paste that.
[20:36:13] <bhosie> crudson: ah i missed copying the first line. but yeah, that's an array - http://pastebin.com/MGc14ycS - uniqueness is necessary.... here's the working query http://pastebin.com/2xJXy33V
[20:39:51] <bhosie> oops. pasted the wrong working command. one sec
[20:39:55] <crudson> bhosie: the 'each' doesn't appear in your js one
[20:54:46] <bhosie> so yeah, that command on the shell works fine
[20:58:52] <crudson> bhosie: not sure :( Is that trailing comma at the end of the update command valid? I'd hope someone with php mongo api familiarity could help, I can't really spend too much time going through it. Can you at least inline something simple like [{'testtesttest': 'hello world'}]. Try and create the simplest php example that will work then start using your variables.
[21:01:08] <bhosie> crudson: yeah thanks for at least the sanity check and the time. before initially posting, i did narrow it down specifically to trying to push subdocs into the array. passing a simple array through the 'each' command like array('foo', 'bar', 'baz') works fine
[21:02:02] <bhosie> i'm thinking more and more this is a bug with the php driver.... just wanted some confirmation before submitting it through Jira
[21:04:55] <bhosie> an alternative would be to use the db->execute() function. but that causes a write lock :(
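For reference, the update document for $addToSet with $each nests the $each inside the target field, not the other way around. A sketch of the expected shape using Python dicts as a neutral notation (the field name `views` and the subdocs are illustrative; the same nesting applies to PHP associative arrays):

```python
# Each element of the list to append; uniqueness is enforced by $addToSet.
new_subdocs = [{"user": "a"}, {"user": "b"}]

# Correct nesting: operator -> field -> $each -> list of subdocs.
update = {"$addToSet": {"views": {"$each": new_subdocs}}}

# The buggy behaviour described above amounts to the subdocs arriving
# wrapped in a literal {'$each': ...} document inside the array instead.
print(list(update["$addToSet"]["views"].keys()))   # ['$each']
```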
[21:10:50] <LouisT> would it be stupid to do: if ($col->count($obj) > 0) { $data = $col->findOne($obj); ... }
[21:12:42] <crudson> LouisT: yes it would be, count performance with attributes is a current issue (https://jira.mongodb.org/browse/SERVER-1752). findOne() will use index correctly.
[21:16:26] <LouisT> crudson: findOne can use $or correct?
[21:17:50] <crudson> LouisT: anything you can pass to find, it will just limit to 1 and return null if no match. Type db.collection.findOne - with no parens in the shell to see what it does
[21:18:27] <LouisT> oh ok, had no idea shell did that
[21:19:04] <crudson> LouisT: it will print source for any js method - helpful for things like your question
[21:24:48] <crudson> LouisT: the key is it simply sets 'limit' to -1, which means cursor will be closed after '1' is read
[21:30:36] <cjhanks> Is there an up to date git repo of just the Mongo Cpp driver?
[21:45:52] <piotr> what can I do with a cursor that times out?
[21:46:03] <piotr> I am in a python loop processing a collection
[22:06:16] <jawr> hey, if i have information like this: http://pastie.org/5037747 what would be more efficient: to store it all in one collection and have the program parse it (i.e. into groups of days/weeks/months), or to have separate collections, which would let me pull the information straight out of the database but would mean duplicated data?
[22:07:46] <cjhanks> timeturner: I think it's 32-bit since the same epoch (1970), but 64-bit doesn't have the Year 2038 problem.
[22:14:16] <timeturner> so if stored as a BSON Date type it's a 64-bit timestamp representation, which avoids the year 2038 problem; if stored as a plain number it's the regular 32-bit unix timestamp, which doesn't
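The arithmetic behind that: a signed 32-bit count of seconds overflows in January 2038, while BSON's Date is a signed 64-bit count of milliseconds since the same 1970 epoch. A small self-contained check:

```python
from datetime import datetime, timezone

# The 32-bit signed-seconds limit: 2**31 - 1 seconds after the epoch.
limit_32 = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)

# The same instant expressed BSON-Date style: milliseconds in 64 bits.
ms = (2**31 - 1) * 1000
assert ms < 2**63 - 1   # nowhere near the 64-bit ceiling

print(limit_32.isoformat())   # 2038-01-19T03:14:07+00:00
```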