[03:25:02] <jonjon> I'm really new to NoSQL. Can I get some quick opinion on this? http://pastebin.com/crGF5CV5
[05:39:08] <racarter> hello, my java mongodb client is using this connection string: mongodb://127.0.0.1:20000/mydb, I have a fresh mongod server running on that port but when I start my client I get this message: couldn't connect to [localhost/127.0.0.1:27017] bc:java.net.ConnectException: Connection refused
[05:39:13] <racarter> is my connection string malformed?
[06:50:30] <imsc> racarter: which platform did you install on?
[06:51:46] <racarter> imsc: mac. I actually figured it out. the connection string had an extra slash - it had to be mongodb://127.0.0.1:20000/mydb instead of mongodb:///127.0.0.1:20000/mydb
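For reference, a well-formed connection string has exactly two slashes after the scheme (host, port, and database taken from the exchange above); a malformed URI can make a driver fall back to the default 27017:

```
mongodb://127.0.0.1:20000/mydb
```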
[08:14:17] <tadasZ> Good morning everyone! strange thing happened today - I was doing some stuff with the website I'm building, and from time to time I check my collections using RockMongo (or the console). I inserted some records and tried to check them in RockMongo and they are not there, then tried the console and they are still not there, but my website is returning saved documents as if nothing happened
[08:14:47] <tadasZ> and saving new ones without any problems
[08:15:12] <tadasZ> but when i do db.mycollection.find(); it returns empty results
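A common cause of symptoms like this is the app and the shell pointing at different databases; a quick diagnostic sketch in the mongo shell (collection name hypothetical):

```javascript
db.getName();             // which database is this shell actually on?
db.mycollection.count();  // compare against what the app reports
db.stats();               // sanity-check that this db has data at all
```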
[10:15:26] <[AD]Turbo> Mon Jul 15 12:07:17.083 [conn183] warning: No such role, "userAdminAnyDatabase", in database mytestdb. No privileges will be acquired from this role
[10:26:45] <[AD]Turbo> mongo shell (that runs ok) is executed on the same machine where mongod runs, the external admin tool is on another machine (inside the same LAN)
[10:27:28] <rspijker> well, that makes sense. That's the localhost exception
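For context, the *AnyDatabase roles only exist in the admin database, which is what that warning is about; a minimal 2.4-era sketch using the pre-2.6 addUser helper (username and password hypothetical):

```javascript
var adminDb = db.getSiblingDB("admin");
adminDb.addUser({ user: "siteAdmin", pwd: "secret",
                  roles: ["userAdminAnyDatabase"] });
```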
[10:44:08] <richthegeek> I'm having some trouble with data not being found in some test units... not sure what's going on
[10:45:01] <richthegeek> using the Node driver, with write concern set to 1, after inserting the data {_id: 1, name: "bob"}, a separate connection then queries for {_id: 1} and finds nothing. It then queries again (same condition) and finds the inserted row.
[10:45:33] <richthegeek> any idea what is going on?
[10:48:05] <richthegeek> nope, no replica or anything
[10:48:31] <richthegeek> this is with essentially the default configuration for mongo 2.4.5
[10:49:28] <Derick> mhmm, sounds odd, I don't know how your code works of course, but do remember that node does lots of things asynchronously
[10:49:52] <richthegeek> yeah, this is using the async library to perform a series of steps in the test
[10:49:58] <richthegeek> one of the steps being "insert data"
[10:50:05] <richthegeek> which doesn't move on until that's done
[10:50:22] <richthegeek> inside the "insert data" step, there's a serial async array - one callback for each row inserted
[10:51:58] <richthegeek> oh wait i've figured it out... the data was a "join" operation so it was inserting stuff from the "left" side before the "right" side, and then finding rows in the right before they were properly inserted
[10:52:13] <richthegeek> the async callback being diverted because the processor is a daemon op
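A minimal Node sketch of the ordering richthegeek describes (2013-era driver API; URL and names hypothetical): the insert's callback is the earliest safe point to query for the written document:

```javascript
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://127.0.0.1:27017/test', function (err, db) {
  if (err) throw err;
  var users = db.collection('users');
  // w:1 makes the callback fire only after the server acknowledges the write
  users.insert({ _id: 1, name: 'bob' }, { w: 1 }, function (err) {
    if (err) throw err;
    // querying before this callback runs can legitimately find nothing
    users.findOne({ _id: 1 }, function (err, doc) {
      console.log(doc); // { _id: 1, name: 'bob' }
      db.close();
    });
  });
});
```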
[11:22:38] <Derick> (that doesn't mean it didn't happen)
[11:23:45] <remonvv> I understand ;) We basically invoked 9 addShard commands and out of nowhere it started complaining about the error mentioned in that ticket, again.
[11:24:23] <remonvv> Oh well. I'll collect all logs and such and open a new issue.
[11:36:06] <remonvv> We scale up and down constantly so it's likely we'll run into this sort of thing more often if it is still an issue.
[11:36:22] <remonvv> Every TV show we run we go from 1-2 shards to upwards of 20 for some shows.
[12:47:54] <greenmang0> hello all, we just cleaned a lot of data from our production mongo replica set. we were expecting it would free up a huge amount of disk space, but it didn't. after looking at the mongo docs we realized we can't clean it up automatically - http://docs.mongodb.org/manual/reference/method/db.repairDatabase/#db.repairDatabase - so my question is: is there any way we can free up the disk space?
[12:52:48] <Derick> greenmang0: i think compact is what you want
[12:54:41] <Infin1ty> the best way is to resync, it seems faster than repair
[12:55:22] <Infin1ty> if you go the repair path and you don't have enough free space, you can mount an NFS mount or additional storage and use --repairpath so it'll store temp files there
[12:55:24] <Derick> yeah, but he wants to compact, not repair ;-)
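For reference, compact is issued per collection from the shell; a minimal sketch (collection name hypothetical). In 2.4 it blocks operations on the database it runs against while it works:

```javascript
db.runCommand({ compact: "mycollection" })
```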
[12:56:43] <Infin1ty> I am running this procedure now over 7 shards, got tons of free space to reclaim :), i just resync, seems better
[12:57:59] <h0bbit> Derick, Infin1ty the compact docs say that it should be run on a collection. what if I've dropped a large collection and want to reclaim the space?
[12:58:14] <Infin1ty> "compact requires a small amount of additional disk space while running but unlike repairDatabase it does not free space on the file system."
[12:58:36] <Infin1ty> seems like either repair or resync, i prefer the resync path
[12:59:04] <h0bbit> Infin1ty, I think I agree with you on this. resync seems safer.
[13:02:36] <remonvv> Derick, do you know what the process is to report an issue that involves logs that cannot be public?
[13:02:47] <remonvv> "Community Private" does not seem to allow opening normal tickets
[13:03:48] <remonvv> Derick, this isn't support. And I've noticed someone referring to Community Private as a way to do so for public issues (e.g. report issue public, but keep logs private)
[13:03:53] <Foxandxss> Hi, I told myself not to ask, but I am doing the 10gen course and I really need a hint, just 1 ;( :P
[13:07:09] <Infin1ty> the slowest thing in the resync/repair is building the index afterwards; it seems like it sorts twice or something. it takes ages on large collections, it seems it's only single-threaded
[13:07:37] <Infin1ty> it takes me around 6 hours to replicate a large collection, then the index building process (just one index, _id) takes almost 2 days
[13:08:52] <Foxandxss> So, if I limit a query to 30 it returns 30 but scans 40, while with no limit it returns and scans 40. How do I create an index so that when I limit to 30, it scans just 30? Hints welcome! :P
[13:26:34] <rspijker> Foxandxss: any proper index for the search should accomplish that...
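A hedged illustration (collection, fields, and numbers hypothetical): when an index matches the query's filter and sort, a limit bounds the scan, which 2.4's explain() reports via n and nscanned:

```javascript
db.grades.ensureIndex({ student: 1, score: -1 });
db.grades.find({ student: "alice" })
         .sort({ score: -1 })
         .limit(30)
         .explain();
// with a matching index, n and nscanned should both be 30
```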
[13:27:22] <greenmang0> Derick: Infin1ty - thanks, I will go with the resync
[13:29:34] <greenmang0> Infin1ty: what do you think about this approach - add one more member to the replset, it will sync and occupy less disk space, then make it primary, then remove the secondary members one by one, delete all the data on them, and add them back to the replset
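A sketch of that rolling-resync idea in shell-helper terms (hostnames hypothetical); each retired member can then be wiped and re-added the same way:

```javascript
rs.add("newhost:27017")      // fresh member does an initial sync into compact files
// wait for the new member to reach SECONDARY (check rs.status()), then:
rs.stepDown()                // run on the current primary to hand over
rs.remove("oldhost:27017")   // retire the bloated members one at a time
```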
[14:20:38] <limpc> hi guys. im having trouble getting mongo to work with php. im using the apt repository (ubuntu) to install. everything installs fine, no errors, mongod is running, but when i try to connect to the db via php, it just hangs there, and there's no connection entry in the mongo log
[14:23:35] <Derick> limpc: show us the php script in a pastebin
[14:24:02] <Derick> limpc: and, which version of the php driver do you have installed (check the output of phpinfo())
[14:25:28] <limpc> ah i found the problem. was trying to connect to the standard mongo port
[14:25:37] <limpc> im using non-standard ports and forgot to change that in my framework config
[14:33:49] <kevinsimper> Should you always check if a key inside a document exists, because you can't be sure? Or should you do a db.col.find().forEach and ensure all docs have at least a value?
[14:34:24] <kevinsimper> Like if I made a new function which counts pageviews
[14:34:54] <Nodex> you can select where $exists : false
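A minimal sketch of both options (collection and field names hypothetical): either query around the missing key, or backfill old documents once with a multi-update:

```javascript
// find documents that predate the new field
db.posts.find({ comments_count: { $exists: false } });

// ...or backfill old documents so the field is always present
db.posts.update(
  { comments_count: { $exists: false } },
  { $set: { comments_count: 0 } },
  { multi: true }
);
```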
[14:38:59] <kevinsimper> Say you've made a comments section, so now when you create new documents they have "title", "content", "comments", "comments_count"
[14:39:01] <Nodex> and you want to add pageviews to the list of fields?
[14:39:29] <Nodex> well OLD documents won't have the comments nor the count, so why waste space adding them as keys?
[14:39:32] <kevinsimper> You now have 100 old documents, which does not have the "comments_count" field.
[14:53:18] <rspijker> space considerations are also somewhat relative… what happens when things are moved/updated, you get fragmentation, etc. Especially when the amount of old documents is relatively low I would just update them...
[14:53:51] <kevinsimper> rspijker: Okay, that was just what I was looking for :-) In MySQL I would just set the default value to zero in the schema, but i know MongoDB is different.
[14:54:00] <rspijker> it will improve the readability of your code, make things easier to manage, with (imo) very little downside
[14:54:36] <kevinsimper> So you always consider it case by case, you don't have a rule as such?
[14:54:56] <rspijker> I don't think it is a question you can answer generally
[14:55:08] <kevinsimper> So if it was 1 million documents, you would not add an unnecessary zero
[14:55:31] <kevinsimper> Because you could assume it would be zero, because the value is undefined
[14:56:06] <rspijker> it depends heavily on what is important for you. If you need every byte and things aren't going to change much, do some checks in code. If space isn't as much of an issue for you and you prefer code readability, add the field...
[14:56:25] <Nodex> boooooom back to my original answer
[14:56:33] <kevinsimper> Or would you? I am just interested, because coming from MySQL it was not something I had to deal with - that's just how it is with a schema
[14:57:38] <rspijker> for 1M docs I would gladly add a 0 in my current use cases
[14:57:38] <limpc> i cant seem to get a db.insert to work. what am i doing wrong? http://pastebin.com/Y817j7er
[14:58:22] <Derick> auth is reserved I think, try:
[15:00:25] <lxnay> hi, i have a compilation problem with mongodb-2.4.5 on i686
[15:00:53] <lxnay> src/mongo/client/sasl_client_session.cpp: In function 'mongo::Status mongo::{anonymous}::_mongoInitializerFunction_CyrusSaslAllocatorsAndMutexes(mongo::InitializerContext*)':
[15:00:56] <lxnay> src/mongo/client/sasl_client_session.cpp:77:28: error: invalid conversion from 'void* (*)(long unsigned int)' to 'void* (*)(size_t) {aka void* (*)(unsigned int)}' [-fpermissive]
[15:01:35] <lxnay> it looks like size_t is defined as unsigned int while somebody thought it's always long unsigned int...
[15:06:11] <limpc> oddly enough though, changing the collection name did solve the problem
[15:06:17] <limpc> so 'auth' is an undocumented restriction?
[15:22:42] <JoeC1> If I was to use a mongo collection to store session data, after issuing an update to a record, can I reasonably assume it will be available for immediate read?
[15:27:11] <Nodex> it is however a function for it
[15:28:06] <rspijker> exactly, it's a function. You can't do function.insert(…), which is why it fails
[15:28:24] <rspijker> has nothing to do with it wrapping system auth or not
[15:30:04] <rspijker> try doing db.eval.insert(…) or db.getName.insert(…). You'll get the exact same error ;)
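A minimal sketch of the workaround (document contents hypothetical): db.getCollection() bypasses the shell's helper lookup, so collection names that collide with shell functions like auth still work:

```javascript
db.getCollection("auth").insert({ user: "bob" });
db.getCollection("auth").find();
```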
[15:41:26] <dllama> hi guys, i'm fairly new to mongo, pretty sure that when i started it up, i only ran mongod. i didn't specify a local folder or anything.. I'd like to restart mongo and reload it properly, with a full path to the db folder and of course maintain all my documents, etc. what's the safest way to do that?
[15:44:28] <Striki> do you have any points why 1 mongos (out of 30-40) is the source of almost all setShardVersion requests?
[15:44:36] <rspijker> just turning off mongod, moving the files to your desired dbpath and starting mongod up again with the correct dbpath should be fine, dllama
[15:45:28] <Nodex> dllama : did you create a mongodb config / init script?
[15:45:54] <Nodex> did you install it as a package?
[15:45:56] <dllama> if i'm not mistaken, i think i may have just done apt-get install and then just ran it
[15:46:09] <rspijker> dllama: if you want no downtime, you can start a second mongod (with correct dbpath), configure it as secondary, wait till it syncs, then step down the original master
[15:46:13] <Nodex> right, you will have a mongodb.conf... probably in /etc/
[15:46:42] <Nodex> edit that and it will show you the defaults it uses, change what you need, save it and do `/etc/init.d/mongodb restart`
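For reference, a minimal sketch of the relevant lines in a package-installed /etc/mongodb.conf (these paths are the usual Debian/Ubuntu defaults, but verify your own file):

```
dbpath=/var/lib/mongodb
logpath=/var/log/mongodb/mongodb.log
port=27017
```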
[15:47:17] <rspijker> how will that migrate his current documents exactly?
[15:47:39] <dllama> I dumped them to a local folder,
[15:48:07] <Nodex> you can just copy the old ones if you need to
[15:49:28] <dllama> from the running daemon, I dumped them to a folder in my home directory. mongo powers a cms that i'm using and my initial upgrade of the CMS failed to migrate the DB properly, forcing me to essentially rewrite the entire site (mostly copy/paste but was still a tedious process). so i'm actually pretty nervous about this procedure
[15:49:37] <dllama> would absolutely hate to go through that again
[15:49:59] <Nodex> you wont have lost your data. Changing values in the config does not erase the db's
[15:50:34] <Striki> do you know how I can find out what servers owns a certain serverId? the serverId is an ObjectId
[16:38:28] <Nodex> bson = binary json - 10gen's kinda db backend language I suppose
[16:38:37] <dllama> hmm, if my rails app is showing that as an error, would that a configuration error with the db or something really specific to the app?
[16:41:21] <dllama> I can gist the entire error/trace
[16:41:54] <dllama> i just dont even know where to turn with this, either #rubyonrails or here, since the developers' irc room has 3 people in it who never seem to answer :/
[16:42:38] <Nodex> the only time you should even see those words (bson dump) is when backing up or restoring a mongodb
[17:07:51] <limpc> so you HAVE to scale to achieve concurrency
[17:08:12] <limpc> ok well that doesnt exactly solve the issue ;)
[17:08:36] <Nodex> tbh your bottleneck won't be mongodb anyway so it probably doesn't apply to you
[17:08:37] <limpc> but collection is definitely an improvement
[17:10:16] <Nodex> dllama : I don't know what to suggest, perhaps ask the creator of the CMS?
[17:10:32] <limpc> Nodex: hmm, mongo docs say locks are currently database, not collection
[17:10:58] <dllama> Thanks Nodex, seems like thats my best bet @ this point, just wish they actually were available in the irc room they advertise… i'd be furious if this were a commercial product :/
[17:11:36] <drag> Hi there, if I query mongodb with a BSON like: {_id=,name=Joe}, will it ignore the empty _id value or will that cause an error?
[17:11:39] <dllama> went as far as adding every name/mention from their git pages on skype, and nobody has yet accepted my invite lol
[17:13:24] <Nodex> not my day today, I meant database LOL, time to give up helping for the day
[17:13:54] <Nodex> my point still stands, mongo will not be your bottleneck for sessions, your php framework will melt before that happens
[17:17:00] <drag> Hi there, if I query mongodb with a BSON like: {_id=,name=Joe}, will mongo ignore the empty _id value or will that cause an error?
[17:17:23] <Nodex> on the shell or in your driver?
[17:47:40] <remonvv> In so much that the field filter does not work with the positional operator.
[17:48:10] <remonvv> If the distributions array can be large and your requirement is to only fetch one you may want to rethink your schema.
[17:48:17] <silasdavis> remonvv, is there a comparable projection that would work
[17:48:28] <remonvv> Meaning either do a document per distribution, or make the distribution key the field name (not recommended)
[17:48:36] <mrapple> in a map/reduce, i'm creating an array with a lot of values, with duplicates possible. should i remove duplicates in the reduce or finalize stage?
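One hedged answer, assuming map emits arrays of scalars: reduce can be re-invoked over its own output, so it must be idempotent, which makes reduce the safer place to deduplicate; finalize then only sees each key once at the very end:

```javascript
var reduceFn = function (key, values) {
  var seen = {};
  var merged = [];
  values.forEach(function (arr) {
    arr.forEach(function (v) {
      if (!seen[v]) {      // drop duplicates as we merge
        seen[v] = true;
        merged.push(v);
      }
    });
  });
  return merged;           // same shape as the values map emits
};
```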
[17:49:08] <silasdavis> remonvv, ah it's not really a size issue, it's just for convenience at the command line and I wanted to project out the elements before doing a
[17:49:13] <remonvv> Not really. Field filtering is just not meant to do a lot more than exclude fields. It is not meant to be selective about which array elements ought to be returned.
[17:49:30] <silasdavis> would the same projection work in an update context
[17:49:36] <remonvv> silasdavis, I see. Well if it's purely shell you can do it in JS.
[17:49:43] <silasdavis> if I wanted to update the first matching name?
[17:50:01] <silasdavis> remonvv, yes I'll do that now I understand the scope of find, thanks
[17:50:23] <remonvv> You can use the positional operator exactly as you are doing now in an update.
[17:51:13] <remonvv> So, for example: db.licenses.update({ "distributions.key": "b41f7c82-756b-43c8-9435-15cff7498d15" }, { $set: { "distributions.$.name": "HI!" } }) works just fine
[17:51:52] <remonvv> Also have a look at $elemMatch for a somewhat more intuitive element matching if you're matching on more than one array element field.
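A minimal sketch of $elemMatch next to the positional projection (values reused from the update example above):

```javascript
db.licenses.find(
  { distributions: { $elemMatch: { key: "b41f7c82-756b-43c8-9435-15cff7498d15",
                                   name: "HI!" } } },
  { "distributions.$": 1 }
);
```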
[17:56:17] <remonvv> There are a lot of subtleties you'd miss if you don't
[18:18:32] <mrapple> anyone know why i can't get any debug output for M/R finalize to show up?
[18:19:49] <mrapple> just using print() and printjson()
[18:19:56] <mrapple> it will show up in the map or reduce part, but not finalize
[19:51:20] <idlemichael> i have a mongo collection that's ~120GB. i'm trying to export it to another mongo instance on a different server -- every time i import data from the old server to the new server, mongo seems to re-do the entire index. is that standard? is there a way to background the _id index creation, and is that safe?
[19:54:08] <awpti> Could use some advice, doing a little toy project to learn Mongo better. Is this way of storing this data going to bite me in the rear? http://pastie.org/private/lxrgkrroig8jzgfvsrbg
[20:16:24] <Chammmmm> Hi, I had to drop a collection due to a bug in 2.4.2 with corrupted indexes.. After doing a db repair I was able to drop it.. but now the collection is still showing up in the shard status.. with chunks distributed and everything..
[20:16:50] <Chammmmm> Will recreating the collection and reshard it fix it? or I actually need to hack the config database
[20:17:24] <Chammmmm> and I sure dont want to do the second option :)
[20:25:14] <aandy> hi guys, i'm in a bit of a jam. i have a fairly simple replica set of three nodes. running rs.status() shows ALL nodes as secondary (no primary). so obviously i can't reconf to change the priority, or step down. i've tried freezing two of them, but the remaining one doesn't seem to attempt to become primary. do i need to set up an arbiter or can i solve it easier?
[20:29:14] <aandy> i'm afraid i'll need another hint. it's been running like this for about a month, and with heartbeats (and low latency), so elections aren't run (successfully anyway)
[20:29:21] <retran> there's a few scenarios under which your nodes could refuse to elect a primary
[20:31:46] <aandy> oh wait, i may have confused lastheartbeat with lastheartbeatrecv. here's my conf + status: http://pastie.org/private/ubn5lr1ytix3dmo8vji7ag
[20:34:32] <aandy> yeah that helped. turns out one of the nodes could not reach two of the others (short story: tunnel mixup). thanks, retran :)
[20:36:33] <retran> cool glad it worked. i guess your problem fell under network partition
[20:39:17] <awpti> Okay, I can't figure this out and the docs don't go into it.. how do I remove an item from a doc in a doc? http://pastie.org/8143662 (I want to get rid of created_time from login_data)
[20:44:02] <retran> awpti, serialize a new object based on existing object minus the field
[20:44:45] <retran> since you have the added complexity of a recursive doc, you would possibly need to control for concurrency in a transaction
[20:45:26] <retran> let me make sure that's the best way, one min
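For the record, a simpler alternative than rebuilding the object (collection name and selector hypothetical, nested field names taken from the question): $unset with dot notation removes a nested field in place:

```javascript
db.users.update(
  {},                                              // selector: adjust to target specific docs
  { $unset: { "login_data.created_time": "" } },
  { multi: true }
);
```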
[21:29:32] <idlemichael> i think it's time to try it out
[21:29:52] <retran> the _id is based on time, but if it finds there's another inserted at same time it does some other magic to it to make it different
[21:30:30] <retran> i guess that would make your inserts potentially perform table scans just to complete an insert
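For what it's worth, ObjectId generation never consults existing documents; the 12 bytes are built client-side, so no scan is needed for uniqueness (shell sketch):

```javascript
var id = ObjectId();
id.getTimestamp();  // 4-byte timestamp; the rest is a 3-byte machine id,
                    // a 2-byte pid, and a 3-byte per-process counter
```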
[21:33:48] <jblack> wow, that google thing is pretty awesome, no?
[21:34:03] <retran> nothing about calculating hashes
[21:34:16] <jblack> Yup. I was completely off on that. =)
[21:34:50] <retran> anyway, that doesn't change that having _id be background indexed would be a weird idea
[21:35:09] <retran> since it could potentially cause a table scan (negating the speed of insert gain from background indexing)
[21:35:21] <retran> it would either have to have 'caught up' or do a table scan
[21:35:31] <idlemichael> retran: the reason i'm interested in this is -- i have a ~120GB mongo collection that I'm trying to replicate to another box. every time i do a mongoexport --host remote.host | mongoimport it re-indexes the _id, and blocks
[21:35:36] <idlemichael> and it takes hours and hours
[21:35:37] <jblack> I don't believe it's indexed in the "background". I believe there's an app wide write lock
[21:35:43] <idlemichael> which causes the new data set to become stale
[21:35:58] <retran> he's asking if it could be set to background index
[21:41:48] <idlemichael> retran: mongoexport seems fine, maybe it's not suitable -- i just assumed 10gen would provide suitable tools to work with their data
[21:42:15] <idlemichael> retran: mongoexport doesn't seem to trigger the reindex either, it seems to happen from within mongod
[21:43:01] <retran> have you replicated other kinds of databases when they're that large
[21:43:23] <retran> i've done so in mysql, and mysql's equivalent isn't suitable for such large data sets
[21:43:30] <idlemichael> retran: i haven't. first time :<
[21:43:57] <idlemichael> indeed retran; i tried doing the replicaset method and having a slave catch up to the master, but it never seems to catch up
[21:43:58] <Chammmmm> Question.. if I stop the balancer.. are there other ways that the config servers meta data can be modified? I want to know if I can restart the config server after a dump.. while the rest of mongo is being snapshotted
[21:44:18] <retran> yeah that sounds SO similar to issues in mysql replication on large data
[21:44:39] <retran> i just tarballed the data directory
[21:45:07] <retran> i haven't implemented a replica set yet in mongodb, but i have migrated a mongodb server's data via tarball of the data dir
[21:46:29] <retran> if you have the luxury of stopping your primary during the process, even better
[21:46:46] <retran> well, you'd have to stop it actually for copying the data dir
[21:47:02] <idlemichael> retran: that's the problem. primary is production right now -- stopping it is almost out of the question, but it might come to that
[21:47:34] <retran> depends how much drama you can afford trying to use the migration tools
[21:47:54] <idlemichael> right now it's more affordable than stopping the production db ;)
[21:49:15] <idlemichael> "If you want to simply copy a database or collection from one instance to another, consider using the copydb, clone, or cloneCollection commands, which may be more suited to this task. The mongo shell provides the db.copyDatabase() method."
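A minimal sketch of the cloneCollection helper from that quote, run from the destination server's shell (host and collection name hypothetical):

```javascript
// pulls mycollection from the source host into the current database
db.cloneCollection("old.host:27017", "mycollection");
```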
[22:39:57] <jblack> If you look up the examples at mongoid.org (http://mongoid.org/en/mongoid/docs/querying.html), they're clearly passing in a type String.
[22:40:35] <jblack> perhaps if you did a find_by( id: -objectidhere-) ?
[22:45:32] <crudson> hahuang65: IDs don't have to be ObjectID. For your two examples to be compared to each other you should be doing ObjectID.from_string('abc'), as that is what should check whether it's correct.
[22:46:20] <hahuang65> crudson: but in the old version of mongoid, it would automatically do that for me. Now I have to do it myself?
[22:47:25] <hahuang65> crudson: in Mongoid 2.x, if you do a find where it expects an ID, and that's not a valid format, it would throw InvalidObjectId… now it doesn't. You're saying that I have to check that myself before I pass it to .find or any other query where I use an ObjectId?
[22:47:54] <idlemichael> retran: db.cloneCollection seems to be doing the trick -- i can query and access the db
[22:50:38] <crudson> hahuang65: have you overridden _id's type in the model? If not, how about? field :_id, type: Moped::BSON::ObjectId
[22:51:08] <hahuang65> crudson: I haven't. Let me try that.
[22:53:19] <hahuang65> crudson: ah… looks like my _id is of type Mongoid::Fields::Standard
[22:53:24] <hahuang65> crudson: so your solution might work.
[22:53:40] <crudson> hahuang65: just remove that declaration
[22:54:07] <hahuang65> crudson: I don't think that's declared anywhere… Mongoid::Documents should handle it no?
[22:54:43] <crudson> hahuang65: I misread your last msg sorry
[22:55:56] <hahuang65> crudson: hmm yeah adding your line to the model didn't help.
[22:59:04] <hahuang65> crudson: looking at the code in mongoid, where it tries to call .mongoize(id) it's calling that on Mongoid::Fields::Standard, when I'm assuming it should be Moped::BSON::ObjectId
[23:01:18] <hahuang65> crudson: never mind, Mongoid::Fields::Standard delegates :mongoize to its type. Lemme see where that leads.
[23:01:30] <crudson> yeah you need to examine .type on that
[23:05:37] <kevino> does anyone know what version replSetMaintenance was added?
[23:09:50] <hahuang65> crudson: thanks for your help. I've found the cause. It seems like an oversight on their part… ping me later if you want details.
[23:22:40] <poet> I am attempting to call rs.initiate() from the mongo shell and I get an error "no such cmd: replSetInitiate"
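A hedged guess at the cause: replSetInitiate is only understood by a mongod (a mongos will reject it), and the mongod must be started with a replica-set name before initiation makes sense; a minimal sketch (set name and path hypothetical):

```
mongod --replSet rs0 --dbpath /data/db
# then, from a mongo shell connected to that mongod: rs.initiate()
```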
[23:48:06] <tyl0r> If you stop/shutdown mongodb properly will it flush to disk immediately? i.e. can I start taking a snapshot as soon as the shutdown returns? Or should I run db.fsyncLock() before shutting down? I'm having issues using fsyncLock/Unlock with authentication so I'd prefer not to use that
[23:49:13] <retran> don't know, i've always had good luck with my datadir snapshots, and i never wait any period of time or run a particular command before the shutdown
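For what it's worth, a clean shutdown does leave the data files consistent on disk. The lock-based alternative when the server must stay up is fsyncLock, a minimal sketch:

```javascript
db.fsyncLock();    // flush dirty pages and block writes
// ... take the filesystem snapshot here ...
db.fsyncUnlock();  // resume writes
```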